Towards autonomous object reconstruction for visual search by the humanoid robot HRP-2

8 Author(s)

Abstract:

This paper deals with the problem of object reconstruction for visual search by a humanoid robot. Three subproblems must be solved to achieve this behavior autonomously: full-body motion generation for a given camera pose, a general object representation for visual recognition and pose estimation, and visual detection of an object from far away. First, we address the generation of full-body motion for an HRP-2 humanoid robot to reach a camera pose provided by a Next Best View algorithm. We use an optimization-based approach that includes self-collision avoidance, made possible by a body-to-body distance function with a continuous gradient. The second problem has received a lot of attention for several decades; we present a solution based on 3D vision combined with SIFT descriptors that makes use of the information available from the robot. We show that one of the major limitations of this model is the perception distance. A new approach based on a generative object model is therefore presented to cope with more difficult situations. It relies on a local representation that handles occlusion as well as large scale and pose variations.
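As a concrete illustration of the first subproblem, the following minimal sketch (not the authors' implementation) poses posture generation as a constrained optimization: a toy planar chain reaches a camera position requested by a Next Best View step while a smooth sphere-to-sphere clearance keeps the collision-avoidance constraint differentiable. The chain dimensions, sphere radius, target position, and the SciPy solver are all illustrative assumptions.

```python
# Minimal sketch: reach a target camera position with a toy 3-link planar chain
# while keeping a smooth sphere-to-sphere clearance above a safety margin, so the
# self-collision constraint has a continuous gradient. All numbers are assumptions.
import numpy as np
from scipy.optimize import minimize

LINK = np.array([0.5, 0.4, 0.3])      # hypothetical link lengths [m]
RADIUS = 0.08                         # hypothetical bounding-sphere radius [m]
MARGIN = 0.02                         # required clearance [m]
CAM_TARGET = np.array([0.6, 0.7])     # camera position requested by NBV (assumed)

def joint_positions(q):
    """Forward kinematics of the planar chain: positions of the three link tips."""
    pts, angle, p = [], 0.0, np.zeros(2)
    for qi, li in zip(q, LINK):
        angle += qi
        p = p + li * np.array([np.cos(angle), np.sin(angle)])
        pts.append(p)
    return np.array(pts)

def camera_error(q):
    """Squared distance between the camera (last link tip) and the NBV target."""
    return np.sum((joint_positions(q)[-1] - CAM_TARGET) ** 2)

def clearance(q):
    """Smooth signed clearance between spheres placed on link 1 and link 3."""
    pts = joint_positions(q)
    c1 = 0.5 * pts[0]                 # sphere centre on link 1 (midpoint)
    c3 = 0.5 * (pts[1] + pts[2])      # sphere centre on link 3 (midpoint)
    return np.linalg.norm(c1 - c3) - 2 * RADIUS - MARGIN

result = minimize(camera_error,
                  x0=np.array([0.3, 0.3, 0.3]),
                  constraints=[{"type": "ineq", "fun": clearance}],
                  method="SLSQP")
print("joint angles:", result.x, "residual error:", camera_error(result.x))
```

For the second subproblem, a generic SIFT matching pipeline with OpenCV (again only a sketch; the paper additionally exploits the robot's 3D information) illustrates why perception distance is limiting: a far-away object covers few pixels and yields too few distinctive matches. File names and thresholds are placeholders.

```python
# Generic SIFT-based detection sketch with OpenCV (not the paper's pipeline):
# match descriptors from a stored object view against the current camera frame
# and localise the object with a RANSAC homography.
import cv2
import numpy as np

model_img = cv2.imread("object_view.png", cv2.IMREAD_GRAYSCALE)   # stored view (placeholder)
scene_img = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)  # current frame (placeholder)

sift = cv2.SIFT_create()
kp_m, des_m = sift.detectAndCompute(model_img, None)
kp_s, des_s = sift.detectAndCompute(scene_img, None)

# Lowe's ratio test keeps only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_m, des_s, k=2)
        if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("object found with", int(mask.sum()), "inlier matches")
else:
    # Far-away objects project to few pixels and yield too few SIFT matches;
    # this is the perception-distance limitation that motivates the generative model.
    print("too few matches:", len(good))
```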

Published in:

2007 7th IEEE-RAS International Conference on Humanoid Robots

Date of Conference:

Nov. 29 - Dec. 1, 2007