Understanding the Reason for Misclassification by Generating Counterfactual Images


Abstract:

Explainable AI (XAI) methods contribute to understanding the behavior of deep neural networks (DNNs) and have attracted interest recently. For example, in image classification tasks, attribution maps have been used to indicate the pixels of an input image that are important to the output decision. Oftentimes, however, it is difficult to understand the reason for misclassification from a single attribution map alone. In this paper, to enrich the information related to the reason for misclassification, we propose to generate several counterfactual images using generative adversarial networks (GANs). We empirically show that these counterfactual images and their attribution maps improve the interpretability of misclassified images. Furthermore, we propose to generate transitional images by gradually changing the configurations of a GAN, in order to identify clearly which part of the misclassified image causes the misclassification.
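The abstract does not give implementation details, but the "transitional images" idea can be sketched as linear interpolation in a GAN's latent space between the code of the misclassified image and the code of a counterfactual. Everything below (`generator`, `z_src`, `z_cf`, `transitional_images`) is a hypothetical minimal stand-in, not the authors' code:

```python
import numpy as np

def generator(z: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a trained GAN generator. In the paper's
    # setting this would be a network decoding a latent code into an image;
    # here a deterministic outer product gives each code a toy 2-D "image".
    return np.outer(z, z)

def transitional_images(z_src: np.ndarray, z_cf: np.ndarray, n_steps: int = 8):
    """Decode a sequence of images along the straight line in latent space
    from the misclassified image's code (z_src) to a counterfactual's
    code (z_cf); intermediate frames show where the image changes."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [generator((1.0 - a) * z_src + a * z_cf) for a in alphas]

# Example with 4-dimensional latent codes: the first frame decodes the
# original code and the last frame decodes the counterfactual code.
z_src = np.array([1.0, 0.0, 0.0, 0.0])
z_cf = np.array([0.0, 1.0, 0.0, 0.0])
frames = transitional_images(z_src, z_cf, n_steps=5)
```

Inspecting attribution maps for each intermediate frame, as the paper suggests, would then localize which changing region flips the classifier's decision.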
Date of Conference: 25-27 July 2021
Date Added to IEEE Xplore: 19 August 2021
Conference Location: Aichi, Japan