Abstract:
Machine learning models, especially deep learning models, have achieved state-of-the-art performance in many fields, such as autonomous driving, speech recognition, and facial expression recognition. However, these models are usually not very interpretable, which means it is hard for people to understand and trust the decisions they make. This paper focuses on improving the interpretability of deep neural networks (DNNs) through model transformation, in which the behavior of a DNN is approximated by a transparent model such as a decision tree or a set of rules. We provide a comprehensive literature review of model transformation methods from several aspects, including the type of interpretable model, the structure of the model transformation, and the type of model transformation. The characteristics and perspectives of the model transformation approach are also explored.
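The model transformation idea described above can be sketched in a few lines: train an opaque model, then fit a transparent surrogate (here a shallow decision tree) on the opaque model's predictions so that the tree mimics its decision behavior. This is a minimal illustrative sketch, not a method from the paper; it assumes scikit-learn, uses a small MLP as a stand-in for a DNN, and the dataset and hyperparameters are arbitrary choices.

```python
# Sketch of model transformation: approximating a black-box neural
# network with a transparent decision-tree surrogate (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the opaque model (a small MLP standing in for a DNN).
dnn = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
dnn.fit(X_train, y_train)

# 2. Fit a decision tree on the DNN's *predictions*, not the true labels,
#    so the tree approximates the DNN's behavior rather than the data.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, dnn.predict(X_train))

# 3. Fidelity: how often the surrogate agrees with the DNN on unseen data.
#    High fidelity means the interpretable tree is a faithful explanation.
fidelity = accuracy_score(dnn.predict(X_test), tree.predict(X_test))
print(f"surrogate fidelity: {fidelity:.2f}")
```

Note that the surrogate is evaluated by *fidelity* (agreement with the opaque model) rather than accuracy against the ground truth, since the goal is to explain the model's behavior, not to solve the task again.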
Published in: 2022 41st Chinese Control Conference (CCC)
Date of Conference: 25-27 July 2022
Date Added to IEEE Xplore: 11 October 2022