Multi-sensor fusion for robot pose estimation has attracted considerable interest in recent years. Monte Carlo Localization (MCL) is a common method for self-localization of a mobile robot under the assumption that a map of the environment is available. In this paper we first compare pure vision-based and sonar-based MCL approaches in terms of localization accuracy, and then show how fusing vision and range measurements improves the overall accuracy. Experiments were performed in an environment with high perceptual aliasing, such as our department corridors. They demonstrate that fusing simple and computationally inexpensive sensory information from omnidirectional cameras and sonar sensors enables a mobile robot to localize itself precisely.
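To make the fusion idea concrete, the sketch below shows one predict/weight/resample cycle of a particle filter in which the weight of each particle is the product of two independent sensor likelihoods, one sonar-like (range to a wall) and one vision-like (a coarser position cue). This is a minimal 1-D illustration under assumed Gaussian noise models, not the paper's actual implementation, which operates on a 2-D map with omnidirectional imagery; all function names and parameters here are hypothetical.

```python
import math
import random

def mcl_step(particles, control, sonar_z, vision_z, wall=10.0,
             motion_noise=0.1, sonar_noise=0.5, vision_noise=1.0):
    """One predict-weight-resample cycle of a 1-D Monte Carlo Localization
    filter fusing two sensor readings (illustrative sketch)."""
    # Predict: propagate each particle by the motion command plus noise.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    weights = []
    for p in moved:
        # Sonar likelihood: compare the reading to the expected range to the wall.
        w_sonar = math.exp(-0.5 * ((sonar_z - (wall - p)) / sonar_noise) ** 2)
        # Vision likelihood: compare a coarser absolute-position cue.
        w_vision = math.exp(-0.5 * ((vision_z - p) / vision_noise) ** 2)
        # Fusion: multiply the independent likelihoods; epsilon avoids all-zero weights.
        weights.append(w_sonar * w_vision + 1e-12)
    total = sum(weights)
    # Resample: draw a new particle set proportionally to the fused weights.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_pos = 2.0
for _ in range(10):
    true_pos += 0.5
    sonar_reading = (10.0 - true_pos) + random.gauss(0.0, 0.1)
    vision_reading = true_pos + random.gauss(0.0, 0.3)
    particles = mcl_step(particles, 0.5, sonar_reading, vision_reading)
estimate = sum(particles) / len(particles)
```

Multiplying per-sensor likelihoods is the standard way to fuse conditionally independent measurements in a particle filter; a sensor with lower noise (here the sonar) naturally dominates the fused weight where it is informative.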