A new method for vision-aided navigation based on three-view geometry is presented. The main goal of the proposed method is to provide position estimation in GPS-denied environments for vehicles equipped with only a standard inertial navigation system (INS) and a single camera, without using any a priori information. Images taken along the trajectory are stored and associated with partial navigation data. Using sets of three overlapping images and the accompanying navigation data, constraints relating the platform motion at the time instances of the three images are developed. These constraints include, in addition to the well-known epipolar constraints, a new constraint related to the three-view geometry of a general scene. The scale ambiguity, inherent in purely computer-vision-based motion estimation techniques, is resolved by utilizing the navigation data attached to each image. The developed constraints are fused with the INS using an implicit extended Kalman filter. The new method reduces position errors in all axes to the levels that prevailed when the first two images were captured. Navigation errors in other parameters, including velocity errors in all axes, are also reduced. The method requires fewer computational resources than bundle adjustment and simultaneous localization and mapping (SLAM). The proposed method was experimentally validated using real navigation and imagery data, and a statistical study based on simulated navigation and synthetic images is presented as well.
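To make the "well-known epipolar constraints" mentioned above concrete, the following is a minimal sketch of the constraint applied between one pair of the three overlapping views. All numbers, the synthetic scene, and the relative motion `R12`, `t12` are illustrative stand-ins for INS-derived quantities, not values from the paper; the code only demonstrates that, for the correct relative motion, the epipolar residual of each matched feature vanishes.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Synthetic 3D scene points expressed in the first camera's frame.
rng = np.random.default_rng(0)
pts = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))

# Hypothetical relative motion between views 1 and 2 (rotation about the
# y-axis plus a translation), standing in for the navigation-data motion.
a = 0.1
R12 = np.array([[np.cos(a), 0.0, np.sin(a)],
                [0.0, 1.0, 0.0],
                [-np.sin(a), 0.0, np.cos(a)]])
t12 = np.array([0.5, 0.1, 0.0])

# Project to normalized image coordinates in each view.
def project(P):
    return P / P[:, 2:3]

p1 = project(pts)                     # view 1
p2 = project((pts - t12) @ R12.T)     # view 2: X2 = R12 @ (X1 - t12)

# Essential matrix for this convention: E = R12 @ [t12]_x,
# giving the epipolar constraint p2^T E p1 = 0 for every match.
E12 = R12 @ skew(t12)
residuals = np.einsum('ij,jk,ik->i', p2, E12, p1)
```

With noise-free matches and the true motion, `residuals` is zero up to floating-point round-off; in practice the residuals of all three view pairs, plus the additional three-view constraint, serve as measurements.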
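The fusion step uses an implicit extended Kalman filter, i.e. the geometric constraints enter as implicit measurements of the form h(x, z) = 0 rather than as direct observations z = h(x). Below is a generic sketch of one such update; the two-state toy constraint, variable names, and noise values are assumptions for illustration, not the paper's actual navigation state or measurement model.

```python
import numpy as np

def implicit_ekf_update(x, P, residual, H, R):
    """One implicit-EKF measurement update.

    The measurement model is an implicit constraint h(x, z) = 0:
    `residual` is h evaluated at the current estimate and H = dh/dx.
    The update drives the constraint residual toward zero.
    """
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x - K @ residual
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Hypothetical 2-state example: enforce the constraint h(x) = x0 - x1 = 0.
x = np.array([1.0, 0.0])
P = np.eye(2)
H = np.array([[1.0, -1.0]])           # dh/dx
residual = np.array([x[0] - x[1]])    # h at the current estimate
R = np.array([[0.01]])
x_upd, P_upd = implicit_ekf_update(x, P, residual, H, R)
```

After the update the constraint violation shrinks and the state covariance contracts, which is the mechanism by which the three-view constraints pull the INS position error back toward the level at the first two image times.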