Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning


Impact Statement:
Deep reinforcement learning (DRL) has numerous real-life applications ranging from autonomous driving to healthcare. It has demonstrated superhuman performance in playing complex games like Go. However, in recent years, many researchers have identified various vulnerabilities of DRL. Keeping this critical aspect in mind, in this article, we present a comprehensive survey of different attacks on DRL and various countermeasures that can be used for robustifying DRL. To the best of our knowledge, this survey is the first attempt at classifying attacks based on the different components of the DRL pipeline. This article will provide a roadmap for researchers and practitioners to develop robust DRL systems.

Abstract:

Deep reinforcement learning (DRL) has numerous applications in the real world, thanks to its ability to achieve high performance in a range of environments with little manual oversight. Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications (e.g., smart grids, traffic controls, and autonomous vehicles) unless its vulnerabilities are addressed and mitigated. To address this problem, we provide a comprehensive survey that discusses emerging attacks on DRL-based systems and the potential countermeasures to defend against these attacks. We first review the fundamental background on DRL and present emerging adversarial attacks on machine learning techniques. We then investigate the vulnerabilities that an adversary can exploit to attack DRL along with state-of-the-art countermeasures to prevent such attacks. Finally, we highlight open issues and research challenges for developing solutions to deal with attacks on DRL-based intelligent systems.
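The observation-perturbation attacks this survey covers typically follow the FGSM pattern (Goodfellow et al.): nudge the agent's input along the sign of a gradient so that the policy's preferred action changes. The sketch below is a minimal, hypothetical illustration, assuming a toy linear policy over a 4-action, 8-dimensional observation space (not any specific method from the survey); with a linear policy the relevant gradient is available in closed form.

```python
import numpy as np

# Toy linear "policy": scores = W @ obs; the agent picks the argmax action.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # 4 actions, 8-dimensional observations
obs = rng.normal(size=8)

def action(o):
    return int(np.argmax(W @ o))

# FGSM-style perturbation: step the observation against the gradient of the
# margin between the chosen action's score and the runner-up's score.
a = action(obs)
scores = W @ obs
runner_up = int(np.argsort(scores)[-2])
grad = W[a] - W[runner_up]            # d(margin)/d(obs) for a linear policy
eps = 0.5                              # attack budget (L-infinity)
adv_obs = obs - eps * np.sign(grad)   # shrink the margin

print("clean action:", action(obs), "| perturbed action:", action(adv_obs))
```

Each component of the perturbation is bounded by `eps`, yet the score margin shrinks by `eps` times the L1 norm of the gradient, which is often enough to flip the agent's decision. Against a deep policy the gradient would come from backpropagation rather than a closed form.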
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 3, Issue: 2, April 2022)
Page(s): 90 - 109
Date of Publication: 13 September 2021
Electronic ISSN: 2691-4581

