While the Global Positioning System (GPS) is the most widely used sensor modality for aircraft navigation, the desire to operate in GPS-denied environments has motivated researchers to investigate other navigational sensor modalities. Owing to advances in computer vision and control theory, monocular camera systems have received growing interest as an alternative or complementary sensor to GPS. Cameras can act as navigational sensors by detecting and tracking feature points in an image. Current methods have a limited ability to relate feature points as they enter and leave the camera field of view (FOV). A vision-based position and orientation estimation method for aircraft navigation and control is described. This estimation method accounts for the limited camera FOV by releasing tracked features that are about to leave the FOV and tracking newly acquired features. At each time instant at which new features are selected for tracking, the previous pose estimate is updated. The vision-based estimation scheme can provide input directly to the vehicle guidance system and autopilot. Simulations are performed in which the vision-based pose estimation is integrated with a nonlinear flight model of an aircraft. Experimental verification of the pose estimation is performed using the modelled aircraft.
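The feature-handoff idea summarized above (release features nearing the FOV boundary, select new ones, and fold the pose estimated from the old feature set into a running reference so the estimate persists across handoffs) might be sketched as follows. This is a minimal illustration, not the paper's implementation: the image size, border margin, and the `about_to_leave`/`DaisyChainedPose` names are assumptions introduced here, and the relative pose per feature set is taken as given (in practice it would come from a vision-based estimator).

```python
import numpy as np

IMG_W, IMG_H = 640, 480   # assumed camera resolution (illustrative)
MARGIN = 20               # pixels from the border at which a feature is "about to leave"

def about_to_leave(pts):
    """Boolean mask of 2-D feature points within MARGIN pixels of the image border."""
    x, y = pts[:, 0], pts[:, 1]
    return (x < MARGIN) | (x > IMG_W - MARGIN) | (y < MARGIN) | (y > IMG_H - MARGIN)

class DaisyChainedPose:
    """Accumulate pose across feature handoffs: whenever the tracked feature set
    is refreshed, the pose estimated from the outgoing set is folded into a
    stored reference, so the overall estimate spans a trajectory longer than
    any single feature set remains in view."""

    def __init__(self):
        self.R_ref = np.eye(3)      # accumulated rotation at the last handoff
        self.t_ref = np.zeros(3)    # accumulated translation at the last handoff

    def current_pose(self, R_rel, t_rel):
        # Pose relative to the initial frame: the banked reference pose
        # composed with the pose estimated from the current feature set.
        return self.R_ref @ R_rel, self.R_ref @ t_rel + self.t_ref

    def handoff(self, R_rel, t_rel):
        # New features selected: bank the current estimate as the new reference.
        self.R_ref, self.t_ref = self.current_pose(R_rel, t_rel)
```

As a design note, composing rotations and translations this way means estimation error accumulates across handoffs, which is why the abstract pairs the scheme with simulation and experiment on a flight model rather than claiming drift-free operation.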