Saliency-Driven Class Impressions For Feature Visualization Of Deep Neural Networks


Abstract:

In this paper, we propose a data-free method of extracting Impressions of each class from the classifier's memory. The Deep Learning regime empowers classifiers to extract distinct patterns (or features) of a given class from training data, which is the basis on which they generalize to unseen data. Before deploying these models in critical applications, it is very useful to visualize the features considered important for classification. Existing visualization methods produce high-confidence images consisting of both background and foreground features, which makes it hard to judge which features of a given class are important. In this work, we propose a saliency-driven approach to visualize the discriminative features that are considered most important for a given task. Another drawback of existing methods is that the confidence of the generated visualizations is increased by creating multiple instances of the given class. We restrict the algorithm to develop a single object per image, which further helps in extracting high-confidence features and also results in better visualizations. We further demonstrate the generation of negative images as naturally fused images of two or more classes. Our code is available at: https://github.com/val-iisc/Saliency-driven-Class-Impressions.
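At a high level, class impressions of this kind are obtained by optimizing an input image so that the classifier assigns it a high score for the target class. The snippet below is a minimal, illustrative sketch of such input-space optimization (activation maximization) in PyTorch; it is not the authors' saliency-driven procedure, and `model`, `target_class` and all hyperparameters are placeholder assumptions.

```python
# Minimal sketch of data-free class-impression generation by optimizing the
# input image itself (activation maximization). This is NOT the authors'
# exact saliency-driven algorithm; model, target_class and hyperparameters
# are illustrative assumptions.
import torch


def class_impression(model, target_class, image_size=(1, 3, 224, 224),
                     steps=200, lr=0.05, device="cpu"):
    model = model.to(device).eval()
    # Start from random noise and treat the image as the optimization variable.
    img = torch.randn(image_size, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(img)
        # Maximize the pre-softmax score of the target class (minimize its
        # negative); a small L2 penalty keeps pixel values bounded.
        loss = -logits[0, target_class] + 1e-4 * img.norm()
        loss.backward()
        optimizer.step()
    return img.detach()
```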
Date of Conference: 25-28 October 2020
Date Added to IEEE Xplore: 30 September 2020
Conference Location: Abu Dhabi, United Arab Emirates

1. Introduction

Deep Learning has resulted in unprecedented progress in many computer vision applications such as classification [1], segmentation [2] and object recognition [3]. In terms of performance metrics such as classification accuracy, deep learning has outperformed the best classical methods by a large margin [1]. However, one of the key issues with Deep Neural Networks is the explainability of the model. In traditional image processing algorithms, features are usually handcrafted using methods such as SIFT [4] and HoG [5], which are intuitive to understand, visualize and explain. In a deep learning framework, however, features are learned by the model and are generated through complex nonlinear mappings from pixel space [6]. This makes it hard to understand which features are important for a given task. Explainable models are very important in applications such as autonomous navigation, medical diagnosis and surveillance systems. Explainability is necessary for legal compliance, for identifying biases in the developed model, and for improving accountability in failure cases.

In order to address these issues, there have been several works [7, 8, 9] on visualizing various aspects of Deep Convolutional Networks, including filters, activation maps, image-specific saliency maps and the important features of a trained model [8, 10]. Visualizing the important features of a trained model helps in understanding the inherent patterns or features that the model uses to make an inference. Such visualizations can help in validating the model and ensuring that it does not overfit to features that may be very specific to the domain at hand. This method can be used to test the generalizability of the model to unseen data.
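As a concrete illustration of one of the visualization tools mentioned above, the following is a minimal sketch of an image-specific saliency map computed from input gradients. The `model`, preprocessing and tensor shapes are assumptions for illustration only; the snippet is not tied to the paper's released code.

```python
# Minimal sketch of an image-specific saliency map via input gradients:
# backpropagate the target-class score to the input pixels and take the
# per-pixel magnitude. The model and preprocessing are placeholders.
import torch


def saliency_map(model, image, target_class):
    """image: preprocessed tensor of shape (1, C, H, W)."""
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image)
    # Gradient of the target-class score with respect to the input pixels.
    logits[0, target_class].backward()
    # Per-pixel importance: maximum absolute gradient across channels.
    return image.grad.abs().max(dim=1)[0].squeeze(0)
```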

