1. Introduction
Deep Learning has led to unprecedented progress in many computer vision applications such as classification [1], segmentation [2] and object recognition [3]. In terms of performance metrics such as classification accuracy, deep learning has outperformed the best classical methods by a large margin [1]. However, one of the key issues with Deep Neural Networks is the explainability of the model. In traditional image processing algorithms, features are usually handcrafted using methods such as SIFT [4] and HoG [5], which are intuitive to understand, visualize and explain. In a deep learning framework, by contrast, features are learned by the model and are generated through complex nonlinear mappings from pixel space [6]. This makes it hard to understand which features are important for a given task.

Explainable models are particularly important in applications such as autonomous navigation, medical diagnosis and surveillance systems. Explainability is necessary for legal compliance, for identifying biases in the developed model, and for improving accountability in failure cases.

To address these issues, there have been several works [7, 8, 9] on visualizing various aspects of Deep Convolutional Networks, including filters, activation maps, image-specific saliency maps and the important features of a trained model [8, 10]. Visualizing the important features of a trained model helps in understanding the inherent patterns or features that the model uses to make an inference. Such visualizations can help validate the model and ensure that it does not overfit to features that are very specific to the domain at hand; they can also be used to test the generalizability of the model to unseen data.
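As a point of reference for the visualization techniques mentioned above, the sketch below computes an image-specific saliency map using plain input gradients. It is a minimal illustration only, not the method of the cited works; the pretrained ResNet-18, the preprocessing pipeline and the input file name are all illustrative assumptions.

```python
# Minimal sketch: vanilla-gradient saliency map for a pretrained classifier.
# Model choice, preprocessing and "input.jpg" are assumptions for illustration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("input.jpg").convert("RGB")   # hypothetical input image
x = preprocess(img).unsqueeze(0)               # shape: (1, 3, 224, 224)
x.requires_grad_(True)

# Forward pass, then backpropagate the score of the top predicted class.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency: maximum absolute gradient across colour channels at each pixel,
# indicating which input pixels most influence the predicted class score.
saliency = x.grad.abs().max(dim=1)[0].squeeze(0)   # shape: (224, 224)
```

The resulting map can be rendered as a heatmap over the input image to show which regions the trained model relies on for its prediction.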