We present a novel vision-based approach to simultaneous localization and mapping (SLAM). We discuss it in the context of estimating the 6 DoF pose of a mobile robot from the perception of a monocular camera using a minimum set of three natural landmarks. In contrast to our previously presented V-GPS system, which navigates based on a set of known landmarks, the current approach estimates the required information about the landmarks on the fly during the exploration of an unknown environment. The method is applicable to indoor and outdoor environments. The pose is calculated from the image positions of a set of natural landmarks that are tracked in a continuous video stream at frame rate. An automatic hand-off process updates the landmark set to compensate for occlusions and for the reconstruction accuracy that decreases with the distance to an imaged landmark. A generic sensor model allows the system to be configured with a variety of physical sensors, including monocular perspective cameras, omni-directional cameras, and laser range finders.
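As an illustrative sketch only (not the paper's algorithm), the pinhole projection model below relates a landmark's 3D position to its image coordinates; recovering the 6 DoF pose from such tracked image positions amounts to inverting this relation. The focal length `f`, principal point `(cx, cy)`, and the restriction of the rotation to yaw are hypothetical simplifications for brevity.

```python
import math

def project(landmark, cam_pos, yaw, f=500.0, cx=320.0, cy=240.0):
    """Project a 3D landmark into the image of a monocular pinhole camera.

    Hypothetical parameters: f, cx, cy are illustrative values, not from
    the paper. The rotation is restricted to yaw for brevity; the full
    system estimates all 6 DoF (3 translation + 3 rotation).
    """
    # Transform the landmark into the camera frame:
    # translate by the camera position, then rotate by the yaw angle.
    x, y, z = (landmark[i] - cam_pos[i] for i in range(3))
    xc = math.cos(yaw) * x + math.sin(yaw) * z
    zc = -math.sin(yaw) * x + math.cos(yaw) * z
    yc = y
    # Perspective division onto the image plane.
    u = f * xc / zc + cx
    v = f * yc / zc + cy
    return u, v

# A landmark straight ahead of an unrotated camera projects
# to the principal point.
print(project((0.0, 0.0, 5.0), (0.0, 0.0, 0.0), 0.0))  # → (320.0, 240.0)
```

With at least three such landmark observations, the pose that best reproduces the measured image positions can be solved for; the SLAM setting additionally estimates the landmark positions themselves.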