Evolutionary value function approximation

Author(s):
Davarynejad, M. (Fac. of Technol., Policy & Manage., Delft Univ. of Technol., Delft, Netherlands); van Ast, J.; Vrancken, J.; van den Berg, J.

Standard reinforcement learning algorithms have proven to be effective tools for letting an agent learn from the experience generated by its interaction with an environment. In this paper, an evolutionary approach is proposed to accelerate the learning speed of tabular reinforcement learning algorithms. In the proposed approach, the state-values are not only approximated but also evolved using concepts from evolutionary algorithms, with the added benefit of giving each agent the opportunity to exchange its knowledge. The proposed evolutionary value function approximation moves learning from a single, isolated stage to a cooperative exploration of the search space, thereby accelerating learning. The performance of the proposed algorithm is compared with that of the standard SARSA algorithm, and some of its properties are discussed. The experimental analysis confirms that the proposed approach converges faster, with a negligible increase in computational complexity.
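The abstract does not specify the algorithm's details, so the following is only a minimal sketch of the general idea: several tabular SARSA agents learn in parallel and periodically exchange value-table entries. The corridor environment, the optimistic initialization, and the uniform-crossover exchange toward the best agent are all illustrative assumptions, not the authors' actual operators.

```python
import random

# --- Illustrative assumptions: toy corridor task, not from the paper ---
N_STATES = 10        # states 0..9; goal at state 9
ACTIONS = (-1, +1)   # move left / move right

def step(s, a):
    """One transition in the toy corridor; reward 1 only on reaching the goal."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

def eps_greedy(Q, s, eps):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    m = max(Q[s])
    return random.choice([i for i, q in enumerate(Q[s]) if q == m])

def sarsa_episode(Q, alpha=0.5, gamma=0.95, eps=0.1, max_steps=200):
    """Standard on-policy tabular SARSA update over one episode."""
    s = 0
    a = eps_greedy(Q, s, eps)
    for _ in range(max_steps):
        s2, r, done = step(s, ACTIONS[a])
        a2 = eps_greedy(Q, s2, eps)
        Q[s][a] += alpha * (r + gamma * (0.0 if done else Q[s2][a2]) - Q[s][a])
        if done:
            break
        s, a = s2, a2

def total_value(Q):
    return sum(sum(row) for row in Q)

def evolve(population):
    """Hypothetical knowledge exchange: blend each agent's value table with
    the best-performing agent's table via uniform crossover. The paper's
    actual evolutionary operators may differ."""
    best = max(population, key=total_value)
    for Q in population:
        if Q is best:
            continue
        for s in range(N_STATES):
            for i in range(len(ACTIONS)):
                if random.random() < 0.5:
                    Q[s][i] = best[s][i]

random.seed(0)
# Optimistic initialization (all values 1.0) encourages systematic exploration.
population = [[[1.0, 1.0] for _ in range(N_STATES)] for _ in range(4)]
for generation in range(30):
    for Q in population:
        for _ in range(3):
            sarsa_episode(Q)
    evolve(population)   # cooperative exploration via value exchange
```

The intended effect, per the abstract, is that exchanging value estimates lets each agent benefit from the exploration done by the others, so the population as a whole converges faster than a single isolated SARSA learner.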

Published in:

2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL)

Date of Conference:

11-15 April 2011