Learning Generalizable Control Programs

2 Author(s)
Hart, S. (Italian Inst. of Technol., Genoa, Italy); Grupen, R.

In this paper, we present a framework for guiding autonomous learning in robot systems. The paradigm we introduce allows a robot to acquire new skills according to an intrinsic motivation function that finds behavioral affordances. Affordances, in the sense of Gibson (Toward an Ecological Psychology, Hillsdale, NJ, 1977), describe the latent possibilities for action in the environment and provide a direct means of organizing functional knowledge in embodied systems. We begin by showing how a robot can assemble closed-loop action primitives from its sensory and motor resources, and then show how these primitives can be sequenced into multi-objective policies. We then show how these policies can be assembled hierarchically to support incremental and cumulative learning. The main contribution of this paper is to demonstrate how the proposed intrinsic motivator for affordance discovery can cause a robot both to acquire such hierarchical policies using reinforcement learning and to generalize these policies to new contexts. As the framework is described, its effectiveness and applicability are demonstrated through a longitudinal learning experiment on a bimanual robot.
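The abstract describes an intrinsic reward that is generated when the robot discovers an affordance, and that reward is what drives the reinforcement learning of policies over closed-loop primitives. The sketch below is a minimal illustration of that idea, not the authors' implementation: the context set, primitive names, and affordance table are hypothetical placeholders, and tabular Q-learning stands in for the paper's hierarchical policy learner. The intrinsic reward is paid only the first time a (context, primitive) affordance is found.

```python
import random

STATES = list(range(6))                            # hypothetical contexts the robot can distinguish
ACTIONS = ["reach", "grasp", "track", "place"]     # hypothetical closed-loop primitives

# Hypothetical ground truth: in which contexts each primitive's controller
# converges (i.e. the affordance is present). On a real robot this would be
# observed by running the closed-loop primitive, not read from a table.
AFFORDANCES = {(s, a) for s in STATES
               for i, a in enumerate(ACTIONS) if (s + i) % 3 == 0}

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
EPISODES, STEPS = 300, 25

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # tabular action-value function
discovered = set()                                   # affordances found so far


def step(state, action):
    """Toy transition: reward is purely intrinsic -- 1.0 the first time an
    affordance (a converging context/primitive pair) is discovered, else 0."""
    next_state = random.choice(STATES)
    if (state, action) in AFFORDANCES and (state, action) not in discovered:
        discovered.add((state, action))
        reward = 1.0
    else:
        reward = 0.0
    return next_state, reward


for _ in range(EPISODES):
    state = random.choice(STATES)
    for _ in range(STEPS):
        # epsilon-greedy selection over the closed-loop primitives
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # standard one-step Q-learning update
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

print(f"affordances discovered: {len(discovered)} of {len(AFFORDANCES)}")
```

The point the sketch illustrates is that the reward signal is internal (novel convergence of a closed-loop primitive) rather than task-specific, so the same learning loop keeps producing new skills as the set of discovered affordances grows; the hierarchical assembly and generalization to new contexts described in the abstract are beyond this toy example.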

Published in:

IEEE Transactions on Autonomous Mental Development (Volume: 3, Issue: 3)