Multi-agent reinforcement learning (RL) problems can in principle be solved by treating the joint actions of the agents as single actions and applying single-agent Q-learning. However, the number of joint actions is exponential in the number of agents, rendering this approach infeasible for most problems. In this paper, we investigate a sparse cooperative decomposition of the Q-function based on a vector potential field, considering joint actions only in those states in which coordination is actually required; in all other states, single-agent Q-learning is applied. This offers a compact state-action value representation without compromising much in terms of solution quality. Coordination states are identified by means of the vector potential field. We have performed experiments in the RoboCup 2D simulation league and compared our algorithm to other multi-agent reinforcement learning algorithms, with promising results.
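The sparse-coordination idea in the abstract can be illustrated with a minimal tabular sketch: joint-action Q-values are maintained only for a small set of coordination states, while independent per-agent Q-tables are used everywhere else. All class and variable names below are illustrative assumptions, and the coordination states are passed in explicitly rather than derived from a potential field as in the paper; this is a sketch of the general technique, not the authors' implementation.

```python
import random
from collections import defaultdict
from itertools import product

class SparseCooperativeQ:
    """Sketch of sparse cooperative Q-learning: joint-action values are
    learned only in designated coordination states; in all other states
    each agent learns its own independent Q-table."""

    def __init__(self, n_agents, actions, coord_states,
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_agents = n_agents
        self.actions = list(actions)            # per-agent action set
        self.coord_states = set(coord_states)   # states requiring coordination
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # independent Q-tables, one per agent, keyed by (state, action)
        self.q_ind = [defaultdict(float) for _ in range(n_agents)]
        # joint Q-table, keyed by (state, joint_action); only consulted
        # in coordination states, so its size stays small
        self.q_joint = defaultdict(float)

    def select(self, state):
        """Epsilon-greedy joint action selection."""
        if random.random() < self.epsilon:
            return tuple(random.choice(self.actions)
                         for _ in range(self.n_agents))
        if state in self.coord_states:
            # maximize over the exponential joint-action space, but
            # only in the few states where coordination matters
            joints = product(self.actions, repeat=self.n_agents)
            return max(joints, key=lambda ja: self.q_joint[(state, ja)])
        # elsewhere each agent maximizes its own table independently
        return tuple(max(self.actions,
                         key=lambda a: self.q_ind[i][(state, a)])
                     for i in range(self.n_agents))

    def best_value(self, state):
        """Greedy value of a state under the sparse representation."""
        if state in self.coord_states:
            joints = product(self.actions, repeat=self.n_agents)
            return max(self.q_joint[(state, ja)] for ja in joints)
        # sum of individual maxima stands in for the joint value
        return sum(max(self.q_ind[i][(state, a)] for a in self.actions)
                   for i in range(self.n_agents))

    def update(self, state, joint_action, reward, next_state):
        """One TD update, routed to the joint or individual tables."""
        target = reward + self.gamma * self.best_value(next_state)
        if state in self.coord_states:
            key = (state, joint_action)
            self.q_joint[key] += self.alpha * (target - self.q_joint[key])
        else:
            # split the TD error equally among the independent learners
            for i, a in enumerate(joint_action):
                key = (state, a)
                self.q_ind[i][key] += (self.alpha / self.n_agents
                                       * (target - self.q_ind[i][key]))
```

The memory saving is the point: with `n` agents and `|A|` actions each, only the coordination states pay the `|A|^n` joint-table cost, while every other state costs `n * |A|` entries.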
Intelligent Systems, 2009. GCIS '09. WRI Global Congress on (Volume: 1)
Date of Conference: 19-21 May 2009