Localization in indoor environments by querying omnidirectional visual maps using perspective images

3 Author(s)
Lourenço, M.; Inst. of Syst. & Robot., Univ. of Coimbra, Coimbra, Portugal; Pedro, V.; Barreto, J.P.

This article addresses the problem of image-based localization in indoor environments. Localization is achieved by querying a database of omnidirectional images that constitutes a detailed visual map of the building where the robot operates. Omnidirectional cameras have the advantage, compared to standard perspective cameras, of capturing the entire visual content of a room in a single frame. This not only speeds up the acquisition of data for building the map, but also favors scalability by significantly decreasing the size of the database. The problem is that omnidirectional images have strong non-linear distortion, which leads to poor retrieval results when the query images are standard perspectives. This paper reports, for the first time, thorough experiments on using perspective images to index a database of para-catadioptric images for the purpose of robot localization. We propose modifications to the SIFT algorithm that significantly improve point matching between the two types of images, with a positive impact on recognition based on visual words. We also compare the classical bag-of-words model against the recent visual-phrases framework, showing that the latter outperforms the former.
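For readers unfamiliar with visual-word retrieval, the sketch below illustrates the generic bag-of-visual-words pipeline the abstract refers to: cluster local descriptors into a vocabulary, represent each image as a word histogram, and rank map images against a query. It is a minimal illustration only, assuming standard OpenCV SIFT (opencv-python >= 4.4) and grayscale uint8 images; the function names, the vocabulary size k=200, and cosine-similarity ranking are illustrative assumptions, not the paper's modified SIFT or its visual-phrase framework.

import cv2
import numpy as np

def build_vocabulary(map_images, k=200):
    # Cluster SIFT descriptors from the map images into k visual words.
    sift = cv2.SIFT_create()
    trainer = cv2.BOWKMeansTrainer(k)
    for img in map_images:
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            trainer.add(desc.astype(np.float32))
    return trainer.cluster()  # (k, 128) array of word centroids

def bow_histogram(img, vocab):
    # Represent an image as a normalized histogram of visual-word counts.
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(img, None)
    if desc is None:
        return np.zeros(len(vocab))
    desc = desc.astype(np.float32)
    # Squared Euclidean distance of every descriptor to every word,
    # via |x|^2 + |c|^2 - 2 x.c to avoid a large broadcast array.
    d2 = ((desc ** 2).sum(1)[:, None]
          + (vocab ** 2).sum(1)[None, :]
          - 2.0 * desc @ vocab.T)
    hist = np.bincount(d2.argmin(axis=1), minlength=len(vocab)).astype(float)
    return hist / (hist.sum() + 1e-9)

def rank_map(query_img, map_images, vocab):
    # Rank map images by cosine similarity of word histograms (best first).
    q = bow_histogram(query_img, vocab)
    scores = []
    for i, img in enumerate(map_images):
        h = bow_histogram(img, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(h) + 1e-9
        scores.append((float(q @ h) / denom, i))
    return sorted(scores, reverse=True)

In practice the map histograms would be computed once and stored with an inverted index rather than recomputed per query; the paper's contribution lies in making the descriptors themselves match across perspective and para-catadioptric images, which this generic sketch does not address.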

Published in:

2012 IEEE International Conference on Robotics and Automation (ICRA)

Date of Conference:

14-18 May 2012