A growing number of artificial intelligence researchers have focused on reinforcement learning (RL) based multi-agent systems (MAS). Multi-agent learning problems can in principle be solved by treating the joint actions of the agents as single actions and applying single-agent Q-learning. However, the number of joint actions grows exponentially with the number of agents, rendering this approach infeasible for most problems. In this paper we investigate a regional cooperative decomposition of the Q-function based on potential fields, considering joint actions only in those states in which coordination is actually required; in all other states single-agent Q-learning is applied. This offers a compact state-action value representation without compromising much in terms of solution quality. We performed experiments in the RoboCup 2D simulation league, an ideal testing platform for multi-agent systems, and compared our algorithm to other multi-agent reinforcement learning algorithms, with promising results.
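The core idea of restricting joint-action learning to a small set of coordination states can be illustrated with a minimal tabular sketch. The class below is not the paper's implementation; it is a simplified illustration assuming a hand-specified set of coordination states (`coord_states`), a shared global reward, and an averaged bootstrap value in uncoordinated states — the paper's potential-field mechanism for identifying coordination regions is not modeled here.

```python
import random
from itertools import product
from collections import defaultdict


class SparseCooperativeQ:
    """Illustrative sketch: joint-action Q-values are stored only for
    designated coordination states; in all other states each agent
    learns an independent single-agent Q-function."""

    def __init__(self, n_agents, actions, coord_states,
                 alpha=0.1, gamma=0.9, eps=0.1):
        self.n_agents = n_agents
        self.actions = list(actions)
        self.coord_states = set(coord_states)   # states requiring coordination
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        # Joint table only for coordination states: (state, joint_action) -> value.
        self.q_joint = defaultdict(float)
        # One independent table per agent: (state, action) -> value.
        self.q_ind = [defaultdict(float) for _ in range(n_agents)]

    def _all_joint_actions(self):
        # Exponential in n_agents -- enumerated only inside coordination states.
        return list(product(self.actions, repeat=self.n_agents))

    def select(self, state):
        """Epsilon-greedy joint action selection."""
        if random.random() < self.eps:
            return tuple(random.choice(self.actions)
                         for _ in range(self.n_agents))
        if state in self.coord_states:
            return max(self._all_joint_actions(),
                       key=lambda ja: self.q_joint[(state, ja)])
        # Outside coordination states each agent maximizes independently.
        return tuple(max(self.actions,
                         key=lambda a: self.q_ind[i][(state, a)])
                     for i in range(self.n_agents))

    def best_value(self, state):
        """Greedy value used as the bootstrap target."""
        if state in self.coord_states:
            return max(self.q_joint[(state, ja)]
                       for ja in self._all_joint_actions())
        # Simplifying assumption: average the per-agent greedy values.
        return sum(max(self.q_ind[i][(state, a)] for a in self.actions)
                   for i in range(self.n_agents)) / self.n_agents

    def update(self, state, joint_action, reward, next_state):
        """One Q-learning backup against the shared global reward."""
        target = reward + self.gamma * self.best_value(next_state)
        if state in self.coord_states:
            q = self.q_joint[(state, joint_action)]
            self.q_joint[(state, joint_action)] = q + self.alpha * (target - q)
        else:
            for i, a in enumerate(joint_action):
                q = self.q_ind[i][(state, a)]
                self.q_ind[i][(state, a)] = q + self.alpha * (target - q)
```

With two agents, the joint table covers only the flagged states, so memory grows with the number of coordination states rather than with the full state space times the exponential joint-action space.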