
Modified Q-learning method with fuzzy state division and adaptive rewards

Author:
Maeda, Y. — Fac. of Inf. Sci. & Arts, Osaka Electro-Commun. Univ., Japan

Reinforcement learning can be considered an adaptive learning method for autonomous agents. It is important to balance exploratory behavior, which searches for unknown knowledge, against exploitative behavior, which uses the knowledge already obtained. However, learning is not always efficient at every stage of the search because ordinary Q-learning uses constant learning parameters. For this problem, we have already proposed an adaptive Q-learning method whose learning parameters are tuned by fuzzy rules. Furthermore, it is hard to deal with continuous states and behaviors in ordinary reinforcement learning, and it is also difficult to learn problems with multiple purposes. Therefore, in this research, we propose a modified Q-learning method in which the reward values are tuned according to the state, and which can deal with multiple purposes in a continuous state space by using fuzzy reasoning. We also report simulation results for object-chasing agents using this method.
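The abstract does not give the paper's exact formulation, but the general idea of tuning reward values by fuzzy reasoning over a continuous state can be sketched as follows. This is an illustrative approximation only: the triangular membership functions, their breakpoints, the per-purpose reward levels, and the `ACTIONS` set are all hypothetical choices, not taken from the paper.

```python
ACTIONS = ["left", "right", "stay"]  # hypothetical discrete action set

def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_reward(distance, rewards=(1.0, 0.2, -0.5)):
    """Blend per-purpose rewards by the fuzzy membership of the agent's
    distance to the target in the "near", "medium", and "far" fuzzy sets,
    so the reward varies smoothly with the continuous state."""
    mu = [
        tri_membership(distance, -1.0, 0.0, 1.0),  # near
        tri_membership(distance, 0.0, 1.0, 2.0),   # medium
        tri_membership(distance, 1.0, 2.0, 3.0),   # far
    ]
    total = sum(mu)
    if total == 0.0:
        return rewards[-1]  # beyond all sets: treat as fully "far"
    return sum(m * r for m, r in zip(mu, rewards)) / total

def q_update(q, state, action, next_state, reward, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update on a dict-based Q-table."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

In this sketch the state-dependent reward from `fuzzy_reward` is simply fed into the ordinary Q-learning update, which is one plausible reading of "reward values tuned according to the state by fuzzy reasoning."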

Published in:

Proceedings of the 2002 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'02), Volume 2

Date of Conference:

2002