A Policy Gradient Reinforcement Learning Algorithm with Fuzzy Function Approximation

Authors:

Dongbing Gu; Department of Computer Science, University of Essex, Wivenhoe Park, Colchester CO4 SQ, UK (e-mail: gu@essex.ac.uk); Erfu Yang

For complex systems, reinforcement learning has to be generalised from a discrete form to a continuous form due to large state or action spaces. In this paper, the generalisation of reinforcement learning to continuous state spaces is investigated by using a policy gradient approach. Fuzzy logic is used as a function approximator in the generalisation. To guarantee learning convergence, a policy approximator and a state-action value approximator are employed for the reinforcement learning; both of them are based on fuzzy logic. The convergence of the learning algorithm is justified.
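The abstract describes an actor-critic scheme: a fuzzy-logic policy approximator (actor) and a fuzzy-logic state-action value approximator (critic), trained by policy gradient. The paper's exact rule base is not given here, so the sketch below assumes Gaussian membership functions over a one-dimensional state, a softmax policy over a small discrete action set, and a TD-error-driven update; the rule centres, widths, and step sizes are illustrative, not the authors' values.

```python
import math

# Assumed fuzzy rule base: five Gaussian membership functions on a 1-D state.
CENTERS = [-1.0, -0.5, 0.0, 0.5, 1.0]   # rule centres (illustrative)
SIGMA = 0.3                              # membership width (illustrative)
NUM_ACTIONS = 3                          # small discrete action set

def features(s):
    """Normalised fuzzy firing strengths of state s (sum to 1)."""
    phi = [math.exp(-((s - c) ** 2) / (2 * SIGMA ** 2)) for c in CENTERS]
    total = sum(phi)
    return [p / total for p in phi]

def policy(theta, s):
    """Softmax action probabilities from fuzzy-approximated preferences."""
    phi = features(s)
    prefs = [sum(theta[a][i] * phi[i] for i in range(len(phi)))
             for a in range(NUM_ACTIONS)]
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def update(theta, w, s, a, reward, s_next,
           alpha=0.1, beta=0.05, gamma=0.95):
    """One actor-critic step: TD critic on fuzzy features,
    policy-gradient (log-softmax) actor update."""
    phi, phi_next = features(s), features(s_next)
    v = sum(w[i] * phi[i] for i in range(len(phi)))
    v_next = sum(w[i] * phi_next[i] for i in range(len(phi)))
    delta = reward + gamma * v_next - v          # TD error
    for i in range(len(phi)):                    # critic update
        w[i] += beta * delta * phi[i]
    probs = policy(theta, s)
    for b in range(NUM_ACTIONS):                 # actor: grad of log pi(a|s)
        grad = (1.0 if b == a else 0.0) - probs[b]
        for i in range(len(phi)):
            theta[b][i] += alpha * delta * grad * phi[i]
```

Because the features are normalised firing strengths, both approximators are linear in their parameters, which is the property the paper relies on for its convergence argument.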

Published in:

2004 IEEE International Conference on Robotics and Biomimetics (ROBIO 2004)

Date of Conference:

22-26 Aug. 2004