Attentive Relational State Representation in Decentralized Multiagent Reinforcement Learning


Abstract:

In multiagent reinforcement learning (MARL), it is crucial for each agent to model its relations with its neighbors. Existing approaches usually resort to concatenating the features of multiple neighbors, which fixes both the size and the identity of the inputs; such settings are inflexible and unscalable. In this article, we propose an attentive relational encoder (ARE), a novel scalable feedforward neural module that attentionally aggregates an arbitrary-sized neighboring feature set for state representation in decentralized MARL. The ARE actively selects relevant information from the neighboring agents and is permutation invariant, computationally efficient, and flexible for interactive multiagent systems. Our method consistently outperforms the latest competing decentralized MARL methods on several multiagent tasks. In particular, it shows strong cooperative performance in challenging StarCraft micromanagement tasks and achieves over a 96% winning rate against the most difficult noncheating built-in artificial intelligence bots.
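The core mechanism the abstract describes, attention-weighted aggregation of a variable-sized neighbor feature set into a fixed-size, permutation-invariant summary, can be sketched as follows. This is a minimal illustrative sketch of scaled dot-product attention over neighbors, not the authors' actual ARE architecture; the function name `attentive_aggregate` and the projection matrices `W_q`, `W_k`, `W_v` are hypothetical names chosen for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attentive_aggregate(self_feat, neighbor_feats, W_q, W_k, W_v):
    """Attention-weighted aggregation of an arbitrary-sized neighbor set.

    self_feat:      (d,)   the ego agent's own feature vector
    neighbor_feats: (n, d) features of n neighbors; n may vary per step
    Returns a fixed-size (d_v,) summary that is invariant to neighbor order.
    """
    q = W_q @ self_feat                   # query derived from the ego agent
    keys = neighbor_feats @ W_k.T         # (n, d_k) one key per neighbor
    vals = neighbor_feats @ W_v.T         # (n, d_v) one value per neighbor
    scores = keys @ q / np.sqrt(len(q))   # scaled dot-product relevance scores
    weights = softmax(scores)             # attention distribution over neighbors
    return weights @ vals                 # weighted sum -> permutation invariant
```

Because the output is a score-weighted sum, shuffling the rows of `neighbor_feats` permutes the weights identically and leaves the result unchanged, and the same projection matrices handle any number of neighbors, which is what makes the representation scalable.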
Published in: IEEE Transactions on Cybernetics ( Volume: 52, Issue: 1, January 2022)
Page(s): 252 - 264
Date of Publication: 27 March 2020

ISSN Information:

PubMed ID: 32224477

