Evolving internal reinforcers for an intrinsically motivated reinforcement-learning robot

Authors: M. Schembri, M. Mirolli, G. Baldassarre (Consiglio Nazionale delle Ricerche, Rome)

Abstract:

Intrinsically motivated reinforcement learning (IMRL) has been proposed as a framework within which agents exploit "internal reinforcement" to acquire general-purpose building-block behaviors ("skills") that can later be combined to solve several specific tasks. The architectures so far proposed within this framework are limited in that: (1) they use hardwired "salient events" to form and train skills, which limits the agents' autonomy; (2) they are applicable only to problems with abstract states and actions, such as grid-world problems. This paper proposes solutions to these problems in the form of a hierarchical reinforcement-learning architecture that: (1) exploits evolutionary robotics techniques to allow the system to autonomously discover "salient events"; (2) uses neural networks to allow the system to cope with continuous states and noisy environments. The viability of the proposed approach is demonstrated in a simulated robotic scenario.
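The architecture the abstract describes nests two adaptive loops: an outer evolutionary loop that shapes the internal reinforcer, and an inner reinforcement-learning loop that trains a skill using only that internal reward, while evolutionary fitness is scored on extrinsic task performance. The paper's actual system uses neural networks and evolutionary robotics techniques; the sketch below is only a deliberately minimal illustration of that two-loop structure on a hypothetical 1-D reaching task. All names (`inner_rl`, `extrinsic_fitness`), the hill-climbing updates, and the linear skill are illustrative assumptions, not the authors' method.

```python
import random

random.seed(0)  # deterministic toy run

TARGET = 1.0  # goal position in the hypothetical 1-D task

def inner_rl(reinforcer_w, episodes=30):
    """Inner loop: adapt a skill (a single gain k) using ONLY the
    internal reward produced by the evolved reinforcer weight."""
    k, best_internal = 0.1, float("-inf")
    for _ in range(episodes):
        cand = k + random.gauss(0.0, 0.1)  # perturb the skill parameter
        x, total = 0.0, 0.0
        for _ in range(20):
            prev_dist = abs(TARGET - x)
            x += cand * (TARGET - x)  # skill's action: move toward the target
            # internal reward = reinforcer weight * reduction in distance
            total += reinforcer_w * (prev_dist - abs(TARGET - x))
        if total > best_internal:  # keep the perturbation if internal reward improved
            k, best_internal = cand, total
    return k

def extrinsic_fitness(k):
    """Outer-loop fitness: how close the learned skill ends up to the target."""
    x = 0.0
    for _ in range(20):
        x += k * (TARGET - x)
    return -abs(TARGET - x)

# Outer loop: evolve the internal reinforcer by simple hill climbing,
# scoring each candidate by the *extrinsic* fitness of the skill it trains.
w, best_f = 0.0, extrinsic_fitness(inner_rl(0.0))
for _ in range(15):
    cand_w = w + random.gauss(0.0, 0.5)
    f = extrinsic_fitness(inner_rl(cand_w))
    if f > best_f:
        w, best_f = cand_w, f
```

The key point the sketch mirrors is the separation of signals: the skill never sees the extrinsic fitness directly, and the reinforcer is judged only by how useful the skill it trains turns out to be.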

Published in:

2007 IEEE 6th International Conference on Development and Learning (ICDL 2007)

Date of Conference:

11-13 July 2007