The estimation of a camera's egomotion is a highly desirable goal in many application fields such as augmented reality (AR), visual navigation, robotics, and entertainment. Especially for real-time modeling, the prior estimation of the camera trajectory is an elementary step towards the generation of three-dimensional scene models. This paper presents a framework for the simultaneous recovery of scene structure and camera motion by combining visual and inertial cues. For this purpose, two different system designs are proposed: a loosely coupled system and a monolithic design, which adapts ideas from non-linear state estimation, such as extended Kalman filtering (EKF), for structure and motion recovery.
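The EKF-based fusion mentioned above can be illustrated with a minimal predict/update loop. This is a generic sketch, not the paper's actual system: the constant-velocity process model, the position-only measurement model, and all matrices (`F`, `H`, `Q`, `R`) are illustrative placeholders standing in for a real IMU propagation and visual measurement pipeline.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state x and covariance P through the (linearized) motion model F."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z via the measurement Jacobian H."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy run: 1-D position/velocity state, noisy position measurements.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model (placeholder)
H = np.array([[1.0, 0.0]])              # observe position only (placeholder)
Q = 1e-3 * np.eye(2)                    # process noise (assumed values)
R = np.array([[0.05]])                  # measurement noise (assumed value)
x, P = np.zeros(2), np.eye(2)

for z in [0.10, 0.22, 0.29, 0.41]:      # synthetic measurements
    x, P = ekf_predict(x, P, F, Q)
    x, P = ekf_update(x, P, np.array([z]), H, R)
```

In a visual-inertial system, the predict step would typically be driven by high-rate inertial readings and the update step by lower-rate visual feature observations; the structure above carries over unchanged.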