
Reinforcement learning for balancer-embedded humanoid locomotion

Authors:
Yamaguchi, A.; Hyon, S.; Ogasawara, T. (Grad. Sch. of Inf. Sci., Nara Inst. of Sci. & Technol., Ikoma, Japan)

Reinforcement learning (RL) applications in robotics are of great interest because of their wide applicability; however, many RL applications suffer from high learning costs. We study a new learning-to-walk scheme in which a humanoid robot is embedded with a primitive balancing controller for safety. In this paper, we investigate several RL methods for the walking task. The system has two modes, double stance and single stance, and the selectable action spaces (sub-action spaces) change according to the mode. We therefore compare a hierarchical RL approach and a function-approximator (FA) approach in simulation. To handle the sub-action spaces, we introduce a structured FA. The results demonstrate that non-hierarchical RL algorithms with the structured FA are much faster than the hierarchical RL algorithm. The robot obtains appropriate walking gaits in around 30 episodes (20~30 min), which is considered practical for application to a real humanoid robot.
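The abstract's key mechanism, handling mode-dependent sub-action spaces with a single (non-hierarchical) value function, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the mode names come from the abstract, but the sub-action labels, the linear features, and the epsilon-greedy/TD update are assumptions chosen for a minimal example.

```python
import numpy as np

# Hypothetical sub-action spaces; only the two modes (double/single stance)
# come from the paper. The action labels are illustrative placeholders.
ACTIONS = {
    "double_stance": ["shift_weight", "lift_leg"],
    "single_stance": ["swing_leg", "touch_down"],
}
ALL_ACTIONS = sorted({a for acts in ACTIONS.values() for a in acts})
A_IDX = {a: i for i, a in enumerate(ALL_ACTIONS)}


class StructuredQ:
    """Linear Q-function over the union of sub-action spaces.

    'Structured' here means action selection is restricted to the
    sub-action space valid for the current mode, so one flat learner
    covers both modes without a hierarchical decomposition.
    """

    def __init__(self, n_features, seed=0):
        self.w = np.zeros((len(ALL_ACTIONS), n_features))
        self.rng = np.random.default_rng(seed)

    def q(self, phi, action):
        # Q(s, a) = w_a . phi(s) with a per-action weight vector
        return self.w[A_IDX[action]] @ phi

    def select(self, phi, mode, epsilon=0.1):
        # Restrict the argmax (and exploration) to the mode's sub-actions
        valid = ACTIONS[mode]
        if self.rng.random() < epsilon:
            return valid[self.rng.integers(len(valid))]
        return max(valid, key=lambda a: self.q(phi, a))

    def update(self, phi, action, target, alpha=0.1):
        # Generic TD-style regression of Q(s, a) toward a bootstrapped target
        err = target - self.q(phi, action)
        self.w[A_IDX[action]] += alpha * err * phi
```

A hierarchical alternative would instead learn a separate policy per mode plus a mode-selection layer; the sketch above shows why the structured-FA route needs no such extra layer.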

Published in:

2010 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids)

Date of Conference:

6-8 Dec. 2010