Multi-agent Robust Time Differential Reinforcement Learning Over Communicated Networks



Abstract:

Recently, research on multi-agent reinforcement learning (MARL) has attracted tremendous interest in many applications, especially autonomous driving. The main problem in MARL is how to deal with uncertainty in the environment and the interaction between connected agents. To solve this problem, a distributed robust temporal-difference deep Q-network algorithm (MARTD-DQN) is developed in this paper. MARTD-DQN consists of two parts: a decentralized MARL algorithm (DMARL) and a robust TD deep Q-network algorithm (RTD-DQN). DMARL improves the robustness of policy estimation by fusing the states of neighboring agents over communication networks. RTD-DQN improves robustness to outliers through online estimation of the uncertainty. By combining the two algorithms, the proposed method is robust not only to node failures but also to outliers. The proposed algorithm is then applied to adaptive cruise control (ACC) simulations of autonomous cars, and the simulation results demonstrate its effectiveness.
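
Since only the abstract is available here, the following minimal Python/NumPy sketch illustrates the two ingredients it names: consensus-style fusion of neighbor states (DMARL) and a TD update whose error is bounded by an online uncertainty estimate (RTD-DQN). Every identifier is hypothetical, the uniform fusion weights are an assumption, and the Huber-style clipping driven by a running scale estimate is only a plausible stand-in for the paper's unspecified robust estimator, not the authors' method.

    import numpy as np

    # Hypothetical sketch; names and choices here are illustrative,
    # not the API or the exact estimator used in the paper.

    def fuse_neighbor_states(own_state, neighbor_states, weights=None):
        """Consensus-style fusion of an agent's state with its neighbors'.

        A failed node simply drops out of neighbor_states, so the fused
        estimate degrades gracefully (the node-failure robustness the
        abstract attributes to DMARL).
        """
        states = [own_state] + list(neighbor_states)
        if weights is None:
            # Assumption: uniform fusion weights over the local neighborhood.
            weights = np.full(len(states), 1.0 / len(states))
        return sum(w * s for w, s in zip(weights, states))

    class RobustTD:
        """TD update with an online scale estimate (Huber-style clipping).

        TD errors far beyond the running scale are clipped, bounding the
        influence of any single outlier transition on the Q-update.
        """

        def __init__(self, gamma=0.99, lr=0.1, kappa=1.345, ema=0.01):
            self.gamma, self.lr, self.kappa, self.ema = gamma, lr, kappa, ema
            self.scale = 1.0  # online estimate of TD-error dispersion

        def update(self, q, q_next_max, reward):
            delta = reward + self.gamma * q_next_max - q        # raw TD error
            self.scale += self.ema * (abs(delta) - self.scale)  # online scale
            clip = self.kappa * max(self.scale, 1e-8)
            delta = float(np.clip(delta, -clip, clip))          # robustify
            return q + self.lr * delta                          # updated Q value

    if __name__ == "__main__":
        fused = fuse_neighbor_states(np.array([1.0, 0.0]),
                                     [np.array([0.8, 0.1]),
                                      np.array([1.2, -0.1])])
        td = RobustTD()
        q_new = td.update(q=0.5, q_next_max=0.7, reward=1.0)   # ordinary step
        q_out = td.update(q=0.5, q_next_max=0.7, reward=50.0)  # outlier reward
        print(fused, q_new, q_out)

Because a crashed neighbor simply contributes nothing to the fusion, and a clipped TD error bounds any single transition's influence, the sketch exhibits in miniature the two robustness properties the abstract claims: tolerance to node failures and to outliers.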
Date of Conference: 25-27 July 2018
Date Added to IEEE Xplore: 07 October 2018
Electronic ISSN: 1934-1768
Conference Location: Wuhan, China
