In this paper, we propose a distributed architecture for reinforcement learning in a multi-agent environment, where agents share learned information over a distributed network. We propose a hybrid master/slave and peer-to-peer system architecture: a master node assigns a workload (a portion of the terrain) to each node, but the master also relays communications among all the other system nodes, and in that sense it is a peer-to-peer architecture. The system is loosely coupled in that slave nodes know only of the master node's existence and are concerned only with their own workload (portion of the terrain). As part of this architecture, we show how agents on the same or different nodes are allowed to communicate and share information that pertains to all agents, including the agent obstacle barriers. In particular, one main contribution of the paper is multi-agent reinforcement learning in a distributed system, where the agents have no knowledge of their environment beyond what is available on the computing node on which they run. We show how agents, running on the same or different nodes, coordinate the sharing of their respective environment states and information to collaboratively perform their respective tasks.
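The hybrid topology described above can be sketched roughly as follows. This is an illustrative sketch only: the class names, the strip-based terrain partitioning, and the message format are assumptions for exposition, not the paper's implementation.

```python
# Sketch of the hybrid master/slave + peer-to-peer pattern: the master
# partitions the terrain among worker nodes and relays inter-node
# messages, while each worker knows only the master and its own
# portion of the terrain.

class MasterNode:
    def __init__(self, terrain_rows, terrain_cols, num_workers):
        self.workers = {}
        # Split the terrain into horizontal strips, one per worker
        # (hypothetical partitioning; the paper's scheme may differ).
        rows_per = terrain_rows // num_workers
        self.partitions = {
            wid: (wid * rows_per,
                  terrain_rows if wid == num_workers - 1
                  else (wid + 1) * rows_per,
                  terrain_cols)
            for wid in range(num_workers)
        }

    def register(self, worker):
        self.workers[worker.wid] = worker
        worker.assign(self.partitions[worker.wid])

    def relay(self, src_wid, dst_wid, message):
        # Master-mediated peer-to-peer: workers never address each
        # other directly; all traffic passes through the master.
        self.workers[dst_wid].receive(src_wid, message)


class WorkerNode:
    def __init__(self, wid, master):
        self.wid = wid
        self.master = master       # the only node a worker knows about
        self.partition = None
        self.inbox = []
        master.register(self)

    def assign(self, partition):
        self.partition = partition  # (row_start, row_end, num_cols)

    def share_state(self, dst_wid, state):
        # e.g. forward a newly discovered obstacle barrier to a peer
        self.master.relay(self.wid, dst_wid, state)

    def receive(self, src_wid, message):
        self.inbox.append((src_wid, message))


master = MasterNode(terrain_rows=10, terrain_cols=10, num_workers=2)
w0, w1 = WorkerNode(0, master), WorkerNode(1, master)
w0.share_state(1, {"obstacle": (4, 7)})
print(w0.partition)   # (0, 5, 10)
print(w1.inbox)       # [(0, {'obstacle': (4, 7)})]
```

Note how the loose coupling appears in the code: `WorkerNode` holds a reference only to `master`, so adding or removing peers requires no change on any worker.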