Panoramic images, which are usually produced off-line some time after the photos are taken, would make an entertaining application if they could be generated on mobile devices immediately. Because producing a panorama requires feature extraction and matching, which are expensive on mobile devices, an embedded panoramic system needs a different approach. For on-line mobile panoramas, we therefore propose methods that estimate camera motion during photographing and infer image alignment from that motion, minimizing computation time. A semiautomatic photographing method uses a guided user interface that shows a projected plane of the previously captured image and directs the user to move the camera to the desired location. An automatic method tracks the camera continuously as it moves, estimates infinitesimal camera motions between neighboring frames, and accumulates these motion fields to infer correspondences between frames that are far apart. The resulting motion information initializes alignment and stitching to produce a panorama on a precomputed common plane. However, inaccurate pivoting, lens fall-off, and hand trembling may still cause stitching errors, so we apply a greedy local search algorithm to correct them. The complexity of the whole system is linear in the image size, and the running time is about 7 seconds for photographing and stitching four QVGA images on a mobile device.
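The two key ideas of the abstract, accumulating small inter-frame motions into alignments between faraway frames and refining a stitch with greedy local search, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each inter-frame motion is reduced to a 2-D translation, and the greedy search refines a 1-D horizontal offset by minimizing the sum of squared differences over the overlap; all function names here are hypothetical.

```python
import numpy as np

def accumulate_motion(pairwise_shifts):
    """Chain per-frame (dx, dy) shifts into absolute offsets relative
    to the first frame. A stand-in for accumulating the motion fields
    estimated between neighboring frames."""
    offsets = [(0.0, 0.0)]
    for dx, dy in pairwise_shifts:
        px, py = offsets[-1]
        offsets.append((px + dx, py + dy))
    return offsets

def overlap_error(a, b, shift):
    """Sum of squared differences over the overlap when image b is
    placed `shift` pixels to the right of image a (toy 1-D case)."""
    s = int(round(shift))
    if s <= 0 or s >= a.shape[1]:
        return float("inf")
    ov_a = a[:, s:]
    ov_b = b[:, : a.shape[1] - s]
    return float(np.sum((ov_a - ov_b) ** 2))

def greedy_refine(a, b, shift, max_iter=50):
    """Greedy local search: try +/-1 pixel moves and keep any move
    that reduces the overlap error, stopping at a local minimum."""
    best, best_err = shift, overlap_error(a, b, shift)
    for _ in range(max_iter):
        improved = False
        for cand in (best - 1, best + 1):
            err = overlap_error(a, b, cand)
            if err < best_err:
                best, best_err, improved = cand, err, True
        if not improved:
            break
    return best
```

In this toy setting the initial shift would come from the accumulated motion estimate, and the local search only needs to correct small residual errors such as those caused by hand trembling, which is what keeps the refinement cheap enough for a mobile device.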