Inertial and vision sensors are now considered essential for navigation and guidance of autonomous aerial and ground vehicles. This paper introduces the concept of aiding Inertial Navigation with Vision-Based Simultaneous Localization and Mapping (SLAM) to compensate for Inertial Navigation divergence. We describe the changes to the augmented state vector that this sensor fusion algorithm requires and show that repeatedly measuring map points during certain maneuvers around or near a map point is crucial for constraining the Inertial Navigation position divergence and for reducing the covariance of the map point position estimates. We also show that such an integrated navigation system requires coordination between the guidance and control functions and the vehicle's task to achieve better navigation accuracy.
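The state augmentation the abstract refers to can be illustrated with a minimal sketch (not the paper's implementation): an inertial error-state vector is extended with the 3-D position of each newly observed map point, and the covariance matrix grows accordingly. The 9-element base-state layout, the function name `augment_with_map_point`, and the initial uncertainty `sigma_init` are all assumptions chosen for illustration.

```python
import numpy as np

INS_DIM = 9  # assumed layout: position (3), velocity (3), attitude (3) errors

def augment_with_map_point(x, P, p_world, sigma_init=10.0):
    """Append one 3-D map point to the state vector and covariance matrix."""
    x_aug = np.concatenate([x, p_world])
    n = P.shape[0]
    P_aug = np.zeros((n + 3, n + 3))
    P_aug[:n, :n] = P
    # The new point starts uncorrelated with a large prior uncertainty;
    # repeated measurements of it later build cross-correlations that
    # help bound the inertial position divergence.
    P_aug[n:, n:] = np.eye(3) * sigma_init**2
    return x_aug, P_aug

x = np.zeros(INS_DIM)
P = np.eye(INS_DIM) * 0.1
x, P = augment_with_map_point(x, P, np.array([100.0, 50.0, -20.0]))
print(x.shape, P.shape)  # (12,) (12, 12)
```

Each additional map point enlarges the state by three elements, which is why revisiting already-mapped points (rather than only adding new ones) is what actually constrains the drift.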