Cooperative strategy based on adaptive Q-learning for robot soccer systems

Authors: Kao-Shing Hwang (Electr. Eng. Dept., Nat. Chung Cheng Univ., Chia-Yi, Taiwan); Shun-Wen Tan; Chien-Cheng Chen

The objective of this paper is to develop a self-learning cooperative strategy for robot soccer systems. The strategy enables robots to cooperate and coordinate with each other to achieve the objectives of offense and defense. Through the learning mechanism, the robots learn from both successful and failed experiences and use them to gradually improve their performance. The cooperative strategy is built on a hierarchical architecture. The first layer is responsible for deciding the team formation, that is, how many defenders and sidekicks should play according to the positional states. The second layer performs role assignment based on the decision of the first layer; we develop two algorithms for assigning the roles of attacker, defenders, and sidekicks. The last layer is the behavior layer, in which the robots execute behavior commands and tasks according to their roles. The attacker is responsible for chasing the ball and attacking, the sidekicks for finding good positions, and the defenders for preventing the opponents from scoring. The robots' roles are not fixed; they can dynamically exchange roles with each other. For learning, we develop an adaptive Q-learning method modified from traditional Q-learning. A simple ant experiment shows that adaptive Q-learning is more effective than the traditional technique, and it is also successfully applied to learning the cooperative strategy.
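To make the learning component concrete, the following is a minimal sketch of tabular Q-learning on a toy one-dimensional chase task. The abstract does not specify the paper's adaptive modification, so this is not the authors' method: the visit-count-based decay of the learning rate, the toy state/action spaces, and the `QLearner` class are all illustrative assumptions.

```python
import random

class QLearner:
    """Tabular Q-learning with a per-pair learning rate that decays
    with visit count (an assumed, illustrative form of adaptivity)."""

    def __init__(self, actions, gamma=0.9):
        self.q = {}        # (state, action) -> estimated value
        self.visits = {}   # (state, action) -> visit count
        self.actions = actions
        self.gamma = gamma

    def value(self, s, a):
        return self.q.get((s, a), 0.0)

    def best(self, s):
        # Greedy action for state s.
        return max(self.actions, key=lambda a: self.value(s, a))

    def update(self, s, a, r, s_next):
        # Step size alpha = 1/n shrinks as (s, a) is visited more often.
        n = self.visits.get((s, a), 0) + 1
        self.visits[(s, a)] = n
        alpha = 1.0 / n
        target = r + self.gamma * max(self.value(s_next, b) for b in self.actions)
        self.q[(s, a)] = self.value(s, a) + alpha * (target - self.value(s, a))

# Toy task: states 0..4 on a line; reaching state 4 (the "ball") yields reward 1.
learner = QLearner(actions=[-1, +1])
random.seed(0)
for episode in range(200):
    s = 0
    while s != 4:
        # Epsilon-greedy exploration with epsilon = 0.2.
        a = random.choice(learner.actions) if random.random() < 0.2 else learner.best(s)
        s_next = min(max(s + a, 0), 4)
        r = 1.0 if s_next == 4 else 0.0
        learner.update(s, a, r, s_next)
        s = s_next

print(learner.best(2))  # greedy policy from a middle state
```

After training, the greedy policy moves toward the rewarding state, which is the essence of what the learned cooperative behaviors build on.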

Published in:

IEEE Transactions on Fuzzy Systems (Volume: 12, Issue: 4)