The context of this work is vision-based motion planning for humanoid robots in unknown environments. We present an efficient combination of on-line 3D environment modeling and motion-planning methods for humanoid robots (e.g., whole-body motion and walking) in non-static environments. To model the environment, we rely on a 3D occupancy grid that is updated incrementally from stereo vision. Because the configuration space of a humanoid robot is high-dimensional, a roadmap-based method is used for motion planning. However, since the environment is not static, the roadmap must be updated whenever new visual information arrives: nodes and edges that are in collision with the updated environment model must be erased. Moreover, preliminary steps are necessary to treat the environment as a non-static model. Because the model is built incrementally from vision, several thousand points can appear with each update of the model, so updating the roadmap raises algorithmic-complexity issues. Our approach extends a recent idea to cope with this problem. After presenting the approach, we implement our method by planning a collision-free motion in a 3D occupancy-grid model generated by HRP2 from stereo vision. As the robot navigates in the environment, it receives updated information through its on-board cameras and refreshes the 3D model of the environment incrementally. Such a 3D model can comprise up to millions of voxels. When the status of some voxels changes, our method uses only these changes to update the previous roadmap locally; the updated roadmap is then reused for further motion planning. We evaluate our algorithm by measuring processing time and memory usage at each step and comparing them with those of the method it extends.
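The local roadmap update described above can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the `Roadmap` class, its collision bookkeeping (one grid cell per node, a precomputed set of swept cells per edge), and all names are assumptions made here for exposition. The point it demonstrates is that, when a vision update reports which voxels became occupied, only those changed cells need to be examined to invalidate roadmap nodes and edges, so the cost scales with the size of the update rather than with the size of the grid or the roadmap.

```python
class Roadmap:
    """A toy roadmap over a voxel occupancy grid (hypothetical sketch)."""

    def __init__(self):
        self.nodes = {}      # node id -> grid cell occupied by the node (simplified)
        self.edges = {}      # edge id -> (node1, node2, frozenset of swept cells)
        self.adjacency = {}  # node id -> set of incident edge ids

    def add_node(self, nid, cell):
        self.nodes[nid] = cell
        self.adjacency.setdefault(nid, set())

    def add_edge(self, eid, n1, n2, swept_cells):
        self.edges[eid] = (n1, n2, frozenset(swept_cells))
        self.adjacency[n1].add(eid)
        self.adjacency[n2].add(eid)

    def update(self, newly_occupied):
        """Erase nodes/edges invalidated by cells that just became occupied.

        Only the changed cells are checked against the roadmap; untouched
        parts of the grid are never revisited.
        """
        occupied = set(newly_occupied)
        dead_nodes = {n for n, cell in self.nodes.items() if cell in occupied}
        dead_edges = {e for e, (_, _, cells) in self.edges.items()
                      if cells & occupied}
        for n in dead_nodes:
            dead_edges |= self.adjacency.pop(n)   # edges of a dead node die too
            del self.nodes[n]
        for e in dead_edges:
            n1, n2, _ = self.edges.pop(e)
            for n in (n1, n2):
                if n in self.adjacency:
                    self.adjacency[n].discard(e)
        return dead_nodes, dead_edges
```

For example, if an edge sweeps through cells `(0,0,0)`, `(1,0,0)`, `(2,0,0)` and a vision update marks `(1,0,0)` as occupied, only that edge is erased; its endpoint nodes survive and remain available for further planning queries.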