Motion planning for humanoid robots in environments modeled by vision

Authors:
A. Nakhaei and F. Lamiraux, CNRS-LAAS, University of Toulouse, Toulouse

The context of this work is vision-based motion planning for humanoid robots in an unknown environment. We present an efficient combination of on-line 3D environment modeling and motion-planning methods for humanoid robots (e.g., whole-body motion and walking) in non-static environments. To construct the model of the environment, we rely on a 3D occupancy grid that is updated incrementally by stereo vision. Because the configuration space of a humanoid robot is high-dimensional, a roadmap-based method is used for motion planning. However, as the environment is not static, the roadmap must be updated after each new visual observation: nodes and edges that are in collision with the updated environment model must be erased. Moreover, preliminary steps are necessary to treat the environment as a non-static model. Since the model is constructed incrementally by vision, several thousand points may appear in each update, so updating the roadmap raises algorithmic-complexity issues. Our approach extends a recent idea to cope with this problem. After presenting the approach, we implement our method by planning a collision-free motion in a 3D occupancy-grid model generated by HRP2 from stereo vision. As the robot navigates in the environment, it receives updated information through its on-board cameras and refreshes the 3D model incrementally. The 3D model can contain up to millions of voxels. When the statuses of some voxels change, our method uses these changes to update the previous roadmap locally, and the updated roadmap is reused for further motion planning. We evaluate the algorithm by measuring processing time and memory usage at each step and comparing them with those of the algorithm it extends.
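The local roadmap-update step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid resolution, the point-based treatment of configurations, the segment-sampling collision check, and all function names are assumptions made for the sketch. The key idea it demonstrates is that only nodes and edges touching a newly occupied voxel are re-checked, so the cost of an update scales with the number of changed voxels rather than the size of the roadmap.

```python
VOXEL = 0.1  # occupancy-grid resolution in metres (an assumed value)

def voxel_of(p):
    """Map a 3D point to the index of the voxel containing it."""
    return tuple(int(c // VOXEL) for c in p)

def edge_voxels(a, b, steps=20):
    """Voxels crossed by the straight segment a-b (coarse sampling)."""
    return {voxel_of(tuple(a[i] + (s / steps) * (b[i] - a[i]) for i in range(3)))
            for s in range(steps + 1)}

def update_roadmap(nodes, edges, changed, occupied):
    """Locally repair a roadmap after voxel-status changes.

    nodes:    {node_id: 3D point} (configurations reduced to points here)
    edges:    set of (node_id, node_id) pairs
    changed:  voxels whose status flipped in the latest vision update
    occupied: voxels currently marked occupied

    Only elements touching a voxel that just became occupied are
    re-checked; the rest of the roadmap is kept untouched, which is
    what keeps the update cheap when few voxels change per frame.
    """
    newly_blocked = changed & occupied
    dead = {n for n, p in nodes.items() if voxel_of(p) in newly_blocked}
    kept_edges = {(u, v) for (u, v) in edges
                  if u not in dead and v not in dead
                  and not (edge_voxels(nodes[u], nodes[v]) & newly_blocked)}
    kept_nodes = {n: p for n, p in nodes.items() if n not in dead}
    return kept_nodes, kept_edges
```

For example, with two roadmap nodes on either side of a voxel that just became occupied, the connecting edge is erased while both nodes survive; if the same voxel instead became free, nothing is removed.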

Published in:

2008 8th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2008)

Date of Conference:

1-3 Dec. 2008