Adaptive stepsize selection for online Q-learning in a non-stationary environment

3 Author(s)
Levy, K. (Dept. of Math. & Stat., Melbourne Univ., Vic.); Vazquez-Abad, F.J.; Costa, A.

We consider the problem of real-time control of a discrete-time Markov decision process (MDP) in a non-stationary environment, characterized by large, sudden changes in the parameters of the MDP. We consider an online version of the well-known Q-learning algorithm, which operates directly in its target environment. In order to track changes, the stepsizes (or learning rates) must be bounded away from zero. In this paper, we show how the theory of constant-stepsize stochastic approximation algorithms can be used to motivate and develop an adaptive stepsize algorithm that is appropriate for the online learning scenario described above. Our algorithm automatically achieves a desirable balance between accuracy and rate of reaction, and seeks to track the optimal policy with some pre-determined level of confidence.
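To illustrate the idea described in the abstract, the sketch below shows an online Q-learning update whose stepsize stays bounded away from zero and is adapted from observed TD errors. The specific adaptation rule here (an exponential moving average of |TD error| acting as a change detector, with hypothetical parameters `alpha_min`, `alpha_max`, `sensitivity`) is an illustrative assumption, not the algorithm developed in the paper.

```python
def q_update(q, s, a, r, s_next, alpha, gamma=0.9):
    """One online Q-learning step with stepsize alpha; returns the TD error."""
    td_error = r + gamma * max(q[s_next].values()) - q[s][a]
    q[s][a] += alpha * td_error
    return td_error

class AdaptiveStepsize:
    """Illustrative adaptive stepsize (not the paper's exact rule).

    The stepsize grows when recent TD errors are large (the environment has
    likely changed) and shrinks toward a floor alpha_min > 0 when tracking is
    accurate, so the algorithm never loses the ability to react.
    """

    def __init__(self, alpha_min=0.05, alpha_max=0.5, sensitivity=1.0):
        self.alpha = alpha_min
        self.alpha_min = alpha_min
        self.alpha_max = alpha_max
        self.sensitivity = sensitivity
        self.avg_abs_td = 0.0  # moving average of |TD error|

    def observe(self, td_error):
        # Exponential moving average of |TD error| as a crude change detector.
        self.avg_abs_td = 0.9 * self.avg_abs_td + 0.1 * abs(td_error)
        target = self.alpha_min + self.sensitivity * self.avg_abs_td
        self.alpha = min(self.alpha_max, max(self.alpha_min, target))
        return self.alpha
```

For example, on a single-state MDP whose reward structure is abruptly swapped mid-run (a sudden parameter change of the kind the abstract describes), the bounded-below stepsize lets the greedy action flip to track the new optimum, whereas a decreasing-to-zero stepsize would eventually freeze the policy.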

Published in:

2006 8th International Workshop on Discrete Event Systems

Date of Conference:

10-12 July 2006