Dynamic fuzzy Q-learning and control of mobile robots

3 Author(s)
Deng, C.; Er, M.J.; Xu, J. (Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore)

In this paper, a dynamic fuzzy Q-learning (DFQL) method for navigating a mobile robot efficiently is presented. Self-organizing fuzzy inference is introduced to calculate actions and Q-functions, enabling us to deal with continuous-valued states and actions; consequently, fuzzy rules can be generated automatically. Fuzzy inference systems provide a natural means of incorporating bias components for rapid reinforcement learning. Furthermore, the eligibility trace method is employed in our algorithm, leading to faster learning and alleviating the experimentation-sensitive problem, in which an arbitrarily bad training policy might result in a non-optimal final policy. Experimental results demonstrate that the robot is able to learn the right policy within a few trials.
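
The abstract describes the method at a high level. The sketch below is a minimal, generic illustration of fuzzy Q-learning with epsilon-greedy action selection and eligibility traces, not the authors' DFQL implementation: the self-organizing generation of fuzzy rules described in the paper is omitted, and all class, parameter, and variable names are assumptions chosen for illustration.

```python
import numpy as np

class FuzzyQLearner:
    """Minimal sketch of fuzzy Q-learning with eligibility traces.

    Each fuzzy rule keeps one Q-value per discrete candidate action;
    the continuous control action is a firing-strength-weighted blend
    of the actions chosen by the individual rules.
    """

    def __init__(self, n_rules, actions, alpha=0.05, gamma=0.95,
                 lam=0.7, epsilon=0.1):
        self.actions = np.asarray(actions)        # candidate action set
        self.q = np.zeros((n_rules, len(actions)))
        self.e = np.zeros_like(self.q)            # eligibility traces
        self.alpha, self.gamma = alpha, gamma
        self.lam, self.epsilon = lam, epsilon

    def act(self, phi):
        """phi: normalized firing strengths of the fuzzy rules."""
        choices = np.empty(len(phi), dtype=int)
        for i in range(len(phi)):
            if np.random.rand() < self.epsilon:   # explore
                choices[i] = np.random.randint(len(self.actions))
            else:                                 # exploit
                choices[i] = int(np.argmax(self.q[i]))
        action = float(phi @ self.actions[choices])                 # defuzzified action
        q_value = float(phi @ self.q[np.arange(len(phi)), choices]) # blended Q-value
        return action, choices, q_value

    def update(self, phi, choices, q_value, reward, next_q_value):
        """One SARSA-style update using replacing eligibility traces."""
        td_error = reward + self.gamma * next_q_value - q_value
        self.e *= self.gamma * self.lam               # decay old traces
        self.e[np.arange(len(phi)), choices] = phi    # mark fired rule/action pairs
        self.q += self.alpha * td_error * self.e
```

A typical control loop would compute the rule firing strengths for the current sensor reading, call `act` to obtain the steering command, apply it to the robot, observe the reward and next state, and then call `update` with the blended Q-value of the next state-action pair.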

Published in:

8th Control, Automation, Robotics and Vision Conference (ICARCV 2004), Volume 3

Date of Conference:

6-9 Dec. 2004