
Towards Scalable Verification of Deep Reinforcement Learning


Abstract:

Deep neural networks (DNNs) have gained significant popularity in recent years, becoming the state of the art in a variety of domains. In particular, deep reinforcement learning (DRL) has recently been employed to train DNNs that realize control policies for various types of real-world systems. In this work, we present the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems. To demonstrate the benefits of whiRL 2.0, we apply it to case studies from the communication networks domain that have recently been used to motivate formal verification of DRL systems, and which exhibit characteristics that are conducive for scalable verification. We propose techniques for performing k-induction and semi-automated invariant inference on such systems, and leverage these techniques for proving safety and liveness properties that were previously impossible to verify due to the scalability barriers of prior approaches. Furthermore, we show how our proposed techniques provide insights into the inner workings and the generalizability of DRL systems. whiRL 2.0 is publicly available online.
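The abstract mentions k-induction as one of the proof techniques used to establish safety properties of DRL-controlled systems. The following is a minimal, illustrative sketch of the k-induction proof rule itself, written with the Z3 SMT solver and a hypothetical piecewise-linear toy controller standing in for a trained DNN policy; it is not the whiRL 2.0 implementation, and the state space, policy, and property are invented solely to show how the base case and inductive step are checked.

```python
# Sketch of k-induction for a safety property of a policy-controlled
# transition system. Hypothetical example: the toy "policy" below is a
# hand-written piecewise-linear controller, not a verified DNN.
from z3 import Real, If, And, Not, Solver, sat, unsat

def policy(x):
    # Toy controller: push the state back toward the interval [0, 10].
    return If(x > 10, -1.0, If(x < 0, 1.0, 0.0))

def step(x):
    # One transition of the closed-loop system.
    return x + policy(x)

def safe(x):
    # Safety property P(x): the state stays within [-1, 11].
    return And(x >= -1, x <= 11)

def k_induction(k):
    xs = [Real(f"x_{i}") for i in range(k + 1)]
    trans = [xs[i + 1] == step(xs[i]) for i in range(k)]

    # Base case: from any initial state in [0, 10], the first k+1 states
    # along an execution are all safe.
    base = Solver()
    base.add(xs[0] >= 0, xs[0] <= 10)
    base.add(trans)
    base.add(Not(And([safe(x) for x in xs])))
    if base.check() == sat:
        return False  # counterexample to the base case

    # Inductive step: k consecutive safe states imply the next one is safe.
    ind = Solver()
    ind.add(trans)
    ind.add([safe(x) for x in xs[:-1]])
    ind.add(Not(safe(xs[-1])))
    return ind.check() == unsat

print(k_induction(k=2))  # True if the property is k-inductive for k = 2
```

In the paper's setting the transition relation and policy would instead be encoded from the actual DRL controller and verified with a DNN-verification backend; the sketch above only illustrates the shape of the two queries that k-induction discharges.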
Date of Conference: 19-22 October 2021
Date Added to IEEE Xplore: 29 November 2021
Conference Location: New Haven, CT, USA
