Enhancing Decision Tree Based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization



Abstract:

One obstacle that has so far prevented the introduction of machine learning models, particularly in critical areas, is their lack of explainability. In this work, a practicable approach to gaining explainability of deep artificial neural networks (NNs) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests with different data sets confirm that L1-orthogonal regularization yields surrogate models of lower complexity and, at the same time, higher fidelity than other regularizers.
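
To illustrate the idea sketched in the abstract, the following is a minimal sketch and not the authors' implementation: it trains a small PyTorch classifier with an added L1 penalty on the deviation of W W^T from the identity for every linear layer (one common formulation of L1-orthogonal regularization; the exact variant used in the paper may differ), then fits a shallow scikit-learn decision tree to the network's predictions and reports the fidelity of the surrogate. The data set, network architecture, regularization strength beta, and tree depth are illustrative assumptions.

```python
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic data stands in for the benchmark data sets used in the paper.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.long)

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

def l1_orthogonality_penalty(net: nn.Module) -> torch.Tensor:
    """Sum of ||W W^T - I||_1 over all linear layers -- one common way to
    express an L1-orthogonality penalty on the weight matrices."""
    penalty = torch.zeros(())
    for m in net.modules():
        if isinstance(m, nn.Linear):
            gram = m.weight @ m.weight.t()
            penalty = penalty + (gram - torch.eye(gram.shape[0])).abs().sum()
    return penalty

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
beta = 1e-3  # regularization strength (illustrative, not taken from the paper)

for epoch in range(200):
    loss = criterion(model(X_t), y_t) + beta * l1_orthogonality_penalty(model)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Surrogate step: fit a small decision tree to the regularized network's
# predictions and measure fidelity = fraction of inputs on which they agree.
with torch.no_grad():
    nn_pred = model(X_t).argmax(dim=1).numpy()
tree = DecisionTreeClassifier(max_depth=4).fit(X, nn_pred)
fidelity = (tree.predict(X) == nn_pred).mean()
print(f"Surrogate fidelity to the NN: {fidelity:.3f}")
```

Fidelity here is simply the agreement between the tree and the network on the same inputs, which is the usual way the quality of a surrogate model is measured.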
Date of Conference: 16-19 December 2019
Date Added to IEEE Xplore: 17 February 2020
Conference Location: Boca Raton, FL, USA
