Legs that can walk: embodiment-based modular reinforcement learning applied

Authors:

D. Jacob; D. Polani; C. L. Nehaniv (Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK)

Experiments to illustrate a novel methodology for reinforcement learning in embodied physical agents are described. A simulated legged robot is decomposed into structure-based modules following the authors' EMBER principles of local sensing, action and learning. The legs are individually trained to 'walk' in isolation, and re-attached to the robot; walking is then sufficiently stable that learning in situ can continue. The experiments demonstrate the benefits of the modular decomposition: state-space factorisation leads to faster learning, in this case to the extent that an otherwise intractable problem becomes learnable.
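To make the modular decomposition concrete, the sketch below shows one way a structure-based leg module with local sensing, local actions, and its own learner might be set up: each leg is first trained in isolation with tabular Q-learning on its local state, and only later reattached for in-situ learning. This is a minimal illustrative sketch, not the authors' EMBER implementation; the class names, the Gym-style `env` interface, the reward signal, and all parameter values are assumptions introduced here.

```python
import random
from collections import defaultdict


class LegModule:
    """A hypothetical structure-based module: one leg with local sensing,
    a local action set, and its own tabular Q-learner."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions            # local actions, e.g. discrete joint commands
        self.q = defaultdict(float)       # Q[(local_state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy choice over the leg's own (small) action set.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update using only local experience.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def pretrain_in_isolation(leg, env, episodes=500):
    """Phase 1 (assumed): train a single detached leg to 'walk' using only its
    local state and a local reward, via a Gym-style env (reset/step)."""
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = leg.act(state)
            next_state, reward, done = env.step(action)
            leg.update(state, action, reward, next_state)
            state = next_state
    return leg
```

In a second phase, the pretrained modules would be reattached to the simulated robot and learning would continue in situ, now driven by the whole-robot walking behaviour. The point of the factorisation is that each module's Q-table ranges only over its local state-action space, so the joint problem never has to be learned monolithically.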

Published in:

2005 International Symposium on Computational Intelligence in Robotics and Automation

Date of Conference:

27-30 June 2005