
A novel hybrid learning technique applied to a self-learning multi-robot system


2 Author(s)
Desouky, S.F.; Schwartz, H.M. — Dept. of Systems & Computer Engineering, Carleton University, Ottawa, ON, Canada

This paper discusses learning in a pursuit-evasion game. In the pursuit-evasion model, one robot pursues another in a partially known environment: each robot knows the other's instantaneous position, but neither knows the other's control strategy. Both robots must therefore self-learn their control strategies on-line through interaction with each other. A new hybrid learning technique is proposed that combines reinforcement learning with both a fuzzy controller and genetic algorithms in a two-phase structure. The proposed technique is called a Q(λ)-learning based genetic fuzzy controller (QLBGFC). To evaluate its performance, the proposed technique is compared with the optimal strategy, Q(λ)-learning, and reward-based genetic algorithms. Computer simulations demonstrate the usefulness of the proposed technique. In addition, the convergence and boundedness of the Q-learning algorithm used in the proposed technique are shown.
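The Q(λ)-learning component referenced above can be illustrated with a small, self-contained sketch. This is not the authors' QLBGFC (the fuzzy controller and genetic-algorithm phases are omitted, and the abstract gives no implementation details); it is a minimal tabular Q(λ) learner with accumulating eligibility traces (naive Q(λ)) applied to a toy grid pursuit-evasion game, under assumed parameters: a hypothetical 5×5 grid, a randomly moving evader, and illustrative learning-rate, discount, and trace-decay values.

```python
import random

random.seed(0)

GRID = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def clamp(v):
    """Keep a coordinate inside the grid."""
    return max(0, min(GRID - 1, v))

def move(pos, a):
    """Apply an action to a (row, col) position, staying on the grid."""
    return (clamp(pos[0] + a[0]), clamp(pos[1] + a[1]))

def q_lambda_pursuit(episodes=300, alpha=0.3, gamma=0.9, lam=0.8, eps=0.1):
    """Tabular Q(lambda) for the pursuer; the evader does a random walk.

    State = (pursuer position, evader position); Q and the eligibility
    traces E are sparse dicts keyed by (state, action_index).
    """
    Q = {}
    for _ in range(episodes):
        p, e = (0, 0), (GRID - 1, GRID - 1)
        E = {}  # eligibility traces, reset each episode
        for _ in range(50):  # step limit per episode
            s = (p, e)
            # epsilon-greedy action selection for the pursuer
            if random.random() < eps:
                ai = random.randrange(4)
            else:
                ai = max(range(4), key=lambda i: Q.get((s, i), 0.0))
            p2 = move(p, ACTIONS[ai])
            e2 = move(e, random.choice(ACTIONS))  # evader: random walk
            caught = p2 == e2
            r = 1.0 if caught else -0.01  # assumed reward shaping
            s2 = (p2, e2)
            best_next = max(Q.get((s2, i), 0.0) for i in range(4))
            delta = r + (0.0 if caught else gamma * best_next) \
                - Q.get((s, ai), 0.0)
            E[(s, ai)] = E.get((s, ai), 0.0) + 1.0  # accumulating trace
            # propagate the TD error back along all eligible pairs
            for k in list(E):
                Q[k] = Q.get(k, 0.0) + alpha * delta * E[k]
                E[k] *= gamma * lam  # decay the trace
            if caught:
                break
            p, e = p2, e2
    return Q

Q = q_lambda_pursuit()
```

In the paper's two-phase structure, such Q-values would seed the fuzzy controller, which a genetic algorithm then tunes; here the eligibility traces simply let a single capture reward update the whole recent state-action trajectory at once.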

Published in:

2009 IEEE International Conference on Systems, Man and Cybernetics (SMC 2009)

Date of Conference:

11-14 Oct. 2009