This paper addresses the problem of long-term mobile robot localization in large urban environments using partial a priori knowledge composed of different kinds of images. Typically, GPS is the preferred sensor for outdoor operation. However, GPS-only localization degrades significantly in urban areas where tall nearby structures obstruct the view of the satellites. In our work, we use omnidirectional vision-based sensors to complement GPS and odometry and provide accurate localization. We also present several novel Monte Carlo Localization optimizations and introduce the concept of online knowledge acquisition and integration, presenting a framework able to perform long-term robot localization in real environments. The vision system identifies prominent features in the scene and matches them against a database of geo-referenced features, either known in advance (covering only part of the environment, using both directional and omnidirectional images at different resolutions) or learned and integrated during the localization process (omnidirectional images only). Results of successful robot localization in the old town of Fermo are presented. The whole architecture also performs well in long-term experiments, making the system suitable for real-life robot applications, with a particular focus on the integration of different knowledge sources.
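To make the Monte Carlo Localization (MCL) loop concrete, the following is a minimal, self-contained sketch of one predict-weight-resample cycle of a particle filter on a 1-D corridor with a single known landmark. This is an illustrative toy, not the paper's implementation: the paper fuses odometry, GPS, and omnidirectional vision features, whereas here the sensor model, noise parameters, and landmark setup are all assumptions chosen for clarity.

```python
import random
import math

def mcl_step(particles, odometry, measurement, landmark,
             motion_noise=0.1, sense_noise=0.5):
    """One predict-weight-resample step of a 1-D particle filter.

    particles:   list of hypothesized robot positions (x coordinates)
    odometry:    commanded displacement since the last step
    measurement: noisy measured distance to the known landmark
    """
    # Predict: move each particle by the odometry plus Gaussian motion noise.
    moved = [p + odometry + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: likelihood of the range measurement under a Gaussian sensor model.
    weights = [math.exp(-((abs(landmark - p) - measurement) ** 2)
                        / (2.0 * sense_noise ** 2)) for p in moved]
    total = sum(weights) or 1e-12  # guard against all-zero weights
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Toy run: robot starts at x = 2.0 and moves 1.0 per step toward
# a landmark at x = 10.0; particles start uniformly spread.
random.seed(0)
landmark = 10.0
true_x = 2.0
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(8):
    true_x += 1.0
    z = abs(landmark - true_x) + random.gauss(0.0, 0.3)
    particles = mcl_step(particles, 1.0, z, landmark)
estimate = sum(particles) / len(particles)
# The particle mean should converge near the true position (true_x = 10.0).
```

In the paper's setting the weighting step would instead score each particle by how well geo-referenced image features predicted at that pose match the features currently observed, but the predict-weight-resample structure is the same.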
Date of Conference: 15-17 July 2010