Mobile robot self-location using model-image feature correspondence

2 Author(s)
Talluri, R. (Systems & Information Sciences Laboratory, Texas Instruments Inc., Dallas, TX, USA); Aggarwal, J.K.

The problem of establishing reliable and accurate correspondence between a stored 3-D model and a 2-D image of it is important in many computer vision tasks, including model-based object recognition, autonomous navigation, pose estimation, airborne surveillance, and reconnaissance. This paper presents an approach to solving this problem in the context of autonomous navigation of a mobile robot in an outdoor urban, man-made environment. The robot's environment is assumed to consist of polyhedral buildings. The 3-D descriptions of the lines constituting the buildings' rooftops are assumed to be given as the world model. The robot's position and pose are estimated by establishing correspondence between the straight-line features extracted from the images acquired by the robot and the model features. The correspondence problem is formulated as a two-stage constrained search problem. Geometric visibility constraints are used to reduce the search space of possible model-image feature correspondences. Techniques for effectively deriving and capturing these visibility constraints from the given world model are presented. The position estimation technique presented is shown to be robust and accurate even in the presence of feature detection errors, incomplete model descriptions, and occlusions. Experimental results of testing this approach using a model of an airport scene are presented.
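The two-stage idea in the abstract, pruning candidate model-image pairings with visibility constraints before searching for a consistent assignment, can be illustrated with a small sketch. The data layout and names below (`MODEL_EDGES`, `visible_from`, the grid-cell visibility regions) are illustrative assumptions, not the paper's actual representation, and the second stage stands in for the paper's geometric consistency scoring with a simple one-to-one enumeration.

```python
from itertools import product

# Hypothetical toy world model: each 3-D rooftop edge carries a
# precomputed "visibility region" -- the set of ground-plane cells
# from which that edge can be seen. This structure is assumed for
# illustration only.
MODEL_EDGES = {
    "roof_A": {"visible_from": {(0, 0), (0, 1), (1, 0)}},
    "roof_B": {"visible_from": {(1, 0), (1, 1)}},
    "roof_C": {"visible_from": {(2, 2)}},
}

def candidate_correspondences(image_lines, robot_cell):
    """Stage 1: prune the search space with the visibility constraint.

    Only model edges visible from the robot's (coarsely known) cell
    can correspond to any extracted image line.
    """
    visible = [name for name, edge in MODEL_EDGES.items()
               if robot_cell in edge["visible_from"]]
    # Each image line may a priori match any visible model edge.
    return {img: visible for img in image_lines}

def search_assignment(candidates):
    """Stage 2: constrained search for a one-to-one assignment.

    A real system would score geometric consistency between each
    image line and model edge; here injectivity is the only check.
    """
    lines = list(candidates)
    for combo in product(*(candidates[line] for line in lines)):
        if len(set(combo)) == len(combo):  # one-to-one assignment
            return dict(zip(lines, combo))
    return None  # no consistent assignment found
```

For example, a robot coarsely localized in cell `(1, 0)` never considers `roof_C` as a match, so the stage-2 search runs over 2 x 2 = 4 pairings instead of 3 x 3 = 9; with realistic models the visibility pruning removes most of the combinatorial search space.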

Published in:

IEEE Transactions on Robotics and Automation (Volume 12, Issue 1)