
On-line learning of a feedback controller for quasi-passive-dynamic walking by a stochastic policy gradient method

4 Author(s)
Hitomi, K. (Nara Inst. of Sci. & Technol., Japan); Shibata, T.; Nakamura, Y.; Ishii, S.

A class of biped locomotion called passive dynamic walking (PDW) is recognized as energy-efficient and as a key to understanding human walking. Although PDW is sensitive to initial conditions and disturbances, studies of quasi-PDW, which introduces supplementary actuators, have been reported to overcome this sensitivity. In this article, for the realization of quasi-PDW, an on-line learning scheme for a feedback controller based on a policy gradient reinforcement learning method is proposed. Computer simulations show that the parameters of a quasi-PDW controller are automatically tuned by our method, exploiting the passivity of the robot dynamics. The obtained controller is robust, to some extent, against variations in the slope gradient.
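The abstract does not give implementation details of the learning scheme, but the core idea of a stochastic policy gradient method can be illustrated with a minimal REINFORCE-style sketch. Everything below is a hypothetical stand-in: the toy walker dynamics, the scalar feedback gain, and the Gaussian policy are assumptions for illustration, not the authors' actual controller or robot model.

```python
# Hypothetical sketch: REINFORCE-style tuning of a scalar feedback gain.
# The "walker" dynamics below are a toy stand-in, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(gain, slope=0.02, steps=200):
    """Toy episode: reward for staying near an upright reference posture
    under feedback torque u = -gain * (theta - theta_ref)."""
    theta, omega, total = 0.1, 0.0, 0.0
    for _ in range(steps):
        u = -gain * (theta - 0.0)               # feedback torque
        omega += 0.01 * (np.sin(theta) + slope + u)
        theta += 0.01 * omega
        total += 1.0 - min(abs(theta), 1.0)     # reward for small deviation
    return total

# Gaussian (stochastic) policy over the gain: the mean is the learned parameter.
mean, sigma, lr = 0.5, 0.1, 1e-3
baseline = 0.0
for episode in range(500):
    gain = mean + sigma * rng.standard_normal()   # sample from the policy
    R = rollout_return(gain)
    baseline += 0.05 * (R - baseline)             # running-average baseline
    grad_logp = (gain - mean) / sigma**2          # d/d(mean) log N(gain; mean, sigma^2)
    mean += lr * (R - baseline) * grad_logp       # policy gradient ascent

print("learned feedback gain (mean):", mean)
```

The key point this sketch conveys is that the controller parameter is adjusted on-line from sampled episode returns alone, without a model of the gradient of the dynamics, which is what allows the passivity of the robot to be exploited rather than overridden.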

Published in:

2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005)

Date of Conference:

2-6 Aug. 2005