
Dynamic Action Sequences in Reinforcement Learning

4 Author(s)

Reinforcement learning is a popular method for learning in autonomous dynamical systems. One of the most popular reinforcement learning methods is Q-learning, where the evaluation function and the action selection function are combined in one data structure. However, Q-learning suffers from poor scalability and slow convergence, problems typically addressed by clustering states or by using a hierarchical action system. Hierarchical Q-learning, presented in this paper, provides a simple mechanism for the dynamic creation of hierarchical action sequences that solves the scalability problems of regular Q-learning while retaining its simplicity. By creating dynamic action sequences and using them to generalize over the state-space, the model is able to increase learning speed without prior assumptions about the structure of the state-space.
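To make the baseline concrete, the sketch below shows standard tabular Q-learning, where a single table both evaluates state-action pairs and drives action selection, as the abstract describes. This is an illustration of plain Q-learning only, not the paper's hierarchical mechanism; the chain environment, rewards, and hyper-parameters are assumptions chosen for demonstration.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on an assumed toy chain: action 1 moves right,
    action 0 moves left; reaching the last state gives reward 1."""
    rng = random.Random(seed)
    # One data structure serves both roles: Q[s][a] estimates the return
    # of action a in state s, and the greedy argmax over it selects actions.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection from the same table
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the best successor value
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
# Greedy policy in each non-terminal state after learning
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

On this toy chain the greedy policy converges to moving right in every non-terminal state. The slow convergence the paper targets appears here as the value signal propagating backwards one state per update, which is exactly what longer action sequences can short-circuit.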