Grounding subgoals in information transitions

Authors: van Dijk, S.G.; Polani, D. (Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK)

Abstract:

In reinforcement learning problems, the construction of subgoals has been identified as an important step to speed up learning and to enable skill transfer. For this purpose, one typically extracts states from various saliency properties of an MDP transition graph, most notably bottleneck states. Here we introduce an alternative approach to this problem: assuming a family of MDPs with multiple goals but with a fixed transition graph, we introduce the relevant goal information as the amount of Shannon information that the agent needs to maintain about the current goal at a given state to select the appropriate action. We show that there are distinct transition states in the MDP at which new relevant goal information has to be considered for selecting the next action. We argue that these transition states can be interpreted as subgoals for the current task class, and we use these states to automatically create a hierarchical policy, according to the well-established Options model for hierarchical reinforcement learning.
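To illustrate the core idea, the following is a minimal sketch, not the paper's actual algorithm: on a hypothetical Y-shaped deterministic MDP with two goals, a simplified stand-in for the relevant goal information at each state is computed as the entropy of the distribution of goal-conditioned optimal actions under a uniform goal prior. All state names, the graph shape, and the helper functions here are illustrative assumptions; the state where the entropy jumps to 1 bit plays the role of a transition state (candidate subgoal).

```python
from collections import deque
from math import log2

# Hypothetical Y-shaped deterministic MDP: before the branch, the optimal
# action is the same for every goal; at the branch, the agent must consult
# the goal to choose an arm. Graph shape and names are illustrative only.
edges = {
    'start':  {'fwd': 'branch'},
    'branch': {'left': 'a1', 'right': 'b1'},
    'a1':     {'fwd': 'gA'},
    'b1':     {'fwd': 'gB'},
    'gA': {}, 'gB': {},
}
goals = ['gA', 'gB']  # uniform prior over the task family

def optimal_action(state, goal):
    """First action on a shortest path from state to goal (BFS),
    or None if the goal is unreachable or already reached."""
    if state == goal:
        return None
    queue = deque([(state, None)])
    seen = {state}
    while queue:
        s, first = queue.popleft()
        for a, s2 in edges[s].items():
            if s2 in seen:
                continue
            f = first if first is not None else a
            if s2 == goal:
                return f
            seen.add(s2)
            queue.append((s2, f))
    return None

def goal_info(state):
    """Simplified relevant goal information at `state`, in bits:
    entropy of the goal-conditioned optimal-action distribution."""
    acts = [optimal_action(state, g) for g in goals]
    acts = [a for a in acts if a is not None]
    if not acts:
        return 0.0
    probs = [acts.count(a) / len(acts) for a in set(acts)]
    return -sum(p * log2(p) for p in probs)

for s in ['start', 'branch', 'a1', 'b1']:
    print(s, goal_info(s))
```

Only 'branch' requires a full bit of goal information; in the spirit of the abstract, it is the state at which new relevant goal information must be acquired, and would be flagged as a subgoal for building an Option.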

Published in:

2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL)

Date of Conference:

11-15 April 2011