
Interpreting Deep Neural Networks through Model Transformation: Literature Review


Abstract:

Machine learning models, and deep learning models in particular, have achieved state-of-the-art performance in many fields, such as autonomous driving, speech recognition, and facial expression recognition. However, these models are usually not very interpretable, which means it is hard for people to understand and trust the decisions they make. This paper focuses on improving the interpretability of deep neural networks (DNNs) through model transformation, in which the behavior of a DNN is approximated by a transparent model, such as a decision tree or a rule set. We provide a comprehensive literature review of model transformation methods from several aspects, including the type of interpretable model, the structure of the model transformation, and the type of model transformation. The characteristics and perspectives of the model transformation approach are also explored.
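The core idea the abstract describes, approximating an opaque model's behavior with a transparent surrogate, can be sketched as follows. This is a minimal illustration, not a method from the paper: the `black_box` function is a hypothetical stand-in for a trained DNN, and the surrogate is a single decision stump (a depth-1 decision tree) fitted to the black box's own predictions.

```python
# Sketch of model transformation by surrogate fitting (assumptions:
# black_box stands in for a DNN; the surrogate is a decision stump).
import random

def black_box(x):
    # Stand-in for an opaque DNN: a nonlinear decision over two features.
    return 1 if 0.7 * x[0] + x[1] * x[1] > 0.5 else 0

# 1. Query the black box on sampled inputs to build a surrogate
#    training set; the black box's outputs become the labels.
random.seed(0)
X = [(random.random(), random.random()) for _ in range(500)]
y = [black_box(x) for x in X]

def fit_stump(X, y):
    # 2. Fit the simplest transparent model: for each feature, find
    #    the threshold whose rule best reproduces the labels.
    best = None  # (fidelity, feature index, threshold)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            acc = sum((x[f] > t) == bool(lbl) for x, lbl in zip(X, y)) / len(y)
            acc = max(acc, 1.0 - acc)  # also allow the flipped rule
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best

# "Fidelity" here means how often the transparent rule agrees with
# the black box on the sampled inputs.
fidelity, feat, thr = fit_stump(X, y)
print(f"surrogate rule: split on x[{feat}] at {thr:.3f}, fidelity = {fidelity:.2f}")
```

A real transformation method would grow a full tree or rule set and measure fidelity on held-out queries, but the loop above captures the structure shared by the surveyed approaches: query the DNN, then fit an interpretable model to its input-output behavior.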
Date of Conference: 25-27 July 2022
Date Added to IEEE Xplore: 11 October 2022
Conference Location: Hefei, China


