An actor-critic method using Least Squares Temporal Difference learning

3 Author(s)
Paschalidis, I.C.; Keyong Li; Estanjini, R.M. (Dept. of Electr. & Comput. Eng., Boston Univ., Brookline, MA, USA)

In this paper, we use a Least Squares Temporal Difference (LSTD) algorithm in an actor-critic framework in which the actor and the critic operate concurrently. That is, instead of learning the value function or policy gradient of a fixed policy, the critic carries out its learning on a single sample path while the policy is slowly varying. Convergence of such a process has previously been proven for first-order TD algorithms, TD(λ) and TD(1). However, the conversion to the more powerful LSTD turns out not to be straightforward, because the conditions on the stepsize sequences must be modified for the LSTD case. We propose a solution and prove the convergence of the process. Furthermore, we apply the LSTD actor-critic method to the problem of intelligently dispatching forklifts in a warehouse.
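The concurrent actor-critic operation described in the abstract can be illustrated with a minimal toy sketch. This is not the paper's algorithm: the environment (a two-state chain), the logistic policy, the tabular features, and the stepsize schedule `2/k` are all illustrative assumptions. The critic accumulates LSTD statistics (A, b) along a single sample path while the actor simultaneously takes small policy-gradient steps driven by the critic's TD error, on a slower timescale than the critic's averaging:

```python
import math
import random

def solve2(A, b):
    """Solve a 2x2 linear system A w = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def lstd_actor_critic(steps=20000, gamma=0.9, seed=0):
    """Toy two-state chain: reward 1 for being in state 1; action a moves to state a."""
    rng = random.Random(seed)
    theta = 0.0                          # actor parameter: logit of picking action 1
    A = [[1e-3, 0.0], [0.0, 1e-3]]       # regularized LSTD accumulator: sum phi (phi - gamma phi')^T
    b = [0.0, 0.0]                       # LSTD accumulator: sum phi * r
    s = 0
    for k in range(1, steps + 1):
        p1 = 1.0 / (1.0 + math.exp(-theta))      # logistic policy pi(a=1)
        a = 1 if rng.random() < p1 else 0
        s_next = a                               # action deterministically picks the next state
        r = 1.0 if s == 1 else 0.0               # reward for occupying state 1
        phi = [1.0 - s, float(s)]                # one-hot (tabular) features
        phi_next = [1.0 - s_next, float(s_next)]
        # Critic: accumulate LSTD statistics along the single sample path,
        # even though the policy generating the path is slowly changing.
        for i in range(2):
            b[i] += phi[i] * r
            for j in range(2):
                A[i][j] += phi[i] * (phi[j] - gamma * phi_next[j])
        w = solve2(A, b)                         # critic estimate: V(s) ~ w[s]
        # Actor: policy-gradient step driven by the critic's TD error,
        # with a decaying stepsize so the actor moves slower than the critic.
        td = r + gamma * w[s_next] - w[s]
        grad_log = (1.0 - p1) if a == 1 else -p1 # d/dtheta of log pi(a)
        theta += (2.0 / k) * td * grad_log
        s = s_next
    return theta, w

theta, w = lstd_actor_critic()
print(theta, w)  # theta drifts positive; critic learns V(1) > V(0)
```

In this sketch the actor is rewarded for steering toward state 1, so `theta` drifts upward while the critic's LSTD solution tracks the value of the slowly varying policy. The paper's contribution concerns precisely the stepsize conditions under which such concurrent LSTD learning provably converges.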

Published in:

Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 28th Chinese Control Conference (CDC/CCC 2009)

Date of Conference:

15-18 Dec. 2009