
Topological Mapping and Scene Recognition With Lightweight Color Descriptors for an Omnidirectional Camera


Authors: Ming Liu (Autonomous Systems Lab, ETH Zurich, Zurich, Switzerland); R. Siegwart

Scene recognition for mobile robots has been extensively studied and is important for tasks such as visual topological mapping. Usually, sophisticated key-point-based descriptors are used, which can be computationally expensive. In this paper, we describe a novel, lightweight scene recognition method using an adaptive descriptor based on color features and geometric information extracted from an uncalibrated omnidirectional camera. The proposed method enables a mobile robot to register new scenes online onto a topological representation automatically and, simultaneously, to localize itself with respect to topological regions, all in real time. We adopt a Dirichlet process mixture model (DPMM) to describe the online inference process, based on an approximation of the conditional probabilities of new measurements given incrementally estimated reference models; this enables online inference at up to 50 Hz on a standard CPU. We compare the method with state-of-the-art key-point descriptors and show its advantages in terms of performance and computational efficiency. A real-world experiment is carried out with a mobile robot equipped with an omnidirectional camera, and we further show results on extended datasets.
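The online inference the abstract describes — scoring a new measurement against incrementally estimated reference models and either assigning it to an existing scene or registering a new one — can be sketched with a Chinese-restaurant-process style update. This is a minimal 1-D illustration under stated assumptions (a fixed-variance Gaussian per scene, a fixed base-measure likelihood, and all names and parameter values invented here), not the authors' actual descriptor or model:

```python
import math

ALPHA = 1.0  # assumed DP concentration: prior weight for opening a new scene

class SceneModel:
    """Running Gaussian estimate of one scene's (1-D) descriptor distribution."""
    def __init__(self, x, var=0.5):
        self.n = 1        # number of measurements assigned so far
        self.mean = x     # incrementally updated mean
        self.var = var    # assumed fixed observation variance

    def likelihood(self, x):
        # Gaussian density of measurement x under this scene's model
        return math.exp(-(x - self.mean) ** 2 / (2 * self.var)) \
            / math.sqrt(2 * math.pi * self.var)

    def update(self, x):
        # incremental mean update after assigning x to this scene
        self.n += 1
        self.mean += (x - self.mean) / self.n

def register(scenes, x, base_likelihood=0.1):
    """Assign x to the most likely existing scene, or create a new one.

    CRP rule (up to a common normalizer): existing scene k scores
    n_k * p(x | model_k); a new scene scores ALPHA * base_likelihood,
    where base_likelihood stands in for the base-measure density.
    """
    scores = [s.n * s.likelihood(x) for s in scenes]
    new_score = ALPHA * base_likelihood
    best = max(range(len(scores)), key=scores.__getitem__, default=None)
    if best is None or new_score > scores[best]:
        scenes.append(SceneModel(x))   # register a new scene online
        return len(scenes) - 1
    scenes[best].update(x)             # localize to an existing scene
    return best
```

Feeding a sequence of descriptors that cluster around two values, e.g. `[0.0, 0.05, 3.0, 3.1, 0.02]`, yields scene labels `[0, 0, 1, 1, 0]`: two scenes are registered automatically and later measurements are localized to them, which is the behavior the paper's online registration/localization loop targets (the real method works on full color/geometry descriptors rather than scalars).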

Published in:

IEEE Transactions on Robotics (Volume: 30, Issue: 2)