
Practical Control Design for the Deep Learning Age: Distillation of Deep RL-Based Controllers


Abstract:

Deep Reinforcement Learning (RL) methods often produce performance-leading controllers. Yet, in many applications (e.g., when safety and assurance are critical), they cannot be deployed: deep neural network (DNN) controllers are nearly impossible to verify, and their behavior is hard to interpret. In this paper, we first show, through a simple example, that there exist soft decision tree (SDT) controllers that are equivalent to neural network (NN) controllers in terms of input-output behavior. Based on imitation learning, we then show via three OpenAI Gym environment case studies that it is possible to distill high-performance DNN controllers into non-DNN controllers (e.g., leveraging decision tree and support vector machine architectures) that, while sacrificing only a little performance, can be simpler and more interpretable, and hence amenable to verification and validation. We thus propose a natural control design paradigm that leverages the power of deep RL methods to design reference DNN controllers, and then distills them into non-DNN controllers that can be validated and deployed in real systems. Finally, we identify some distillation metrics that can be useful in assessing the quality of the distilled controllers.
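The distillation pipeline the abstract describes (imitation learning from a trained deep RL teacher into an interpretable student, plus a fidelity-style distillation metric) can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the `teacher_policy` here is a hand-coded stand-in for a trained DNN controller, the state sampling is synthetic rather than from Gym rollouts, and the tree depth is an arbitrary choice.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def teacher_policy(state):
    # Hypothetical stand-in for a trained deep RL (DNN) controller:
    # a CartPole-like rule that pushes in the direction the pole leans.
    return int(state[2] + 0.5 * state[3] > 0.0)

rng = np.random.default_rng(0)

# Collect (state, action) pairs by querying the teacher on sampled states.
# In the paper's setting these would come from environment rollouts.
states = rng.uniform(-1.0, 1.0, size=(5000, 4))
actions = np.array([teacher_policy(s) for s in states])

# Imitation learning: fit an interpretable, shallow decision tree student
# to reproduce the teacher's input-output behavior.
student = DecisionTreeClassifier(max_depth=3).fit(states, actions)

# One possible distillation metric: fidelity, i.e., the fraction of
# held-out states on which student and teacher agree.
test_states = rng.uniform(-1.0, 1.0, size=(1000, 4))
teacher_actions = np.array([teacher_policy(s) for s in test_states])
fidelity = float(np.mean(student.predict(test_states) == teacher_actions))
print(f"fidelity: {fidelity:.2f}")
```

A depth-3 tree is small enough to print and inspect rule by rule, which is what makes the distilled controller amenable to verification; in practice one would trade tree depth against fidelity and task performance.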
Date of Conference: 27-30 September 2022
Date Added to IEEE Xplore: 04 November 2022
Conference Location: Monticello, IL, USA

