Abstract:
Deep convolutional neural network architectures have in recent years been widely used to improve performance on various computer vision tasks, such as image classification, semantic segmentation, and object detection. Great advances in the quality of the obtained results paved the way for using these kinds of neural networks in the medical domain. However, when working with sensitive matters involving human lives, the interpretability and explainability of these models must be considered alongside the typical evaluation metrics for the given task. Tools such as LIME and PyTorch Grad-CAM, among many others, can be used for this purpose. The integration of Explainable AI (XAI) methods proposed in this paper aims to bring the XAI paradigm to medical image classification tasks on the standardized MedMNIST dataset. Such an integration enables a deeper analysis of model quality: misclassified instances can be visually examined and used to paint a clearer picture of the model's overall decision-making process.
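To illustrate the kind of integration the abstract describes, the following is a minimal sketch (not the authors' exact pipeline) of running the pytorch-grad-cam library on a classifier over one MedMNIST subset. The choice of ResNet-18, the PathMNIST subset, and the untrained placeholder weights are all assumptions for illustration; a real run would load a trained checkpoint.

    import torch
    import torchvision.transforms as T
    from torchvision.models import resnet18
    from medmnist import PathMNIST  # one of the standardized MedMNIST subsets
    from pytorch_grad_cam import GradCAM
    from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
    from pytorch_grad_cam.utils.image import show_cam_on_image

    # Assumed setup: a ResNet-18 with its head sized for the 9 PathMNIST
    # classes. Weights would normally come from a trained checkpoint.
    model = resnet18(num_classes=9)
    model.eval()

    dataset = PathMNIST(split="test", download=True, transform=T.ToTensor())
    image, label = dataset[0]            # (3, 28, 28) float tensor in [0, 1]
    input_tensor = image.unsqueeze(0)    # add batch dimension

    # Target the last convolutional block, a common choice for ResNets.
    cam = GradCAM(model=model, target_layers=[model.layer4[-1]])

    # Ask Grad-CAM to explain the prediction for this instance's true class
    # (MedMNIST labels arrive as one-element arrays, hence .item()).
    grayscale_cam = cam(input_tensor=input_tensor,
                        targets=[ClassifierOutputTarget(int(label.item()))])[0]

    # Overlay the heatmap on the input image for visual inspection,
    # e.g. of a misclassified instance.
    rgb = image.permute(1, 2, 0).numpy()
    visualization = show_cam_on_image(rgb, grayscale_cam, use_rgb=True)

Inspecting such overlays for misclassified test instances is one way to realize the visual examination of the decision-making process that the abstract proposes.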
Published in: 2024 11th International Conference on Electrical, Electronic and Computing Engineering (IcETRAN)
Date of Conference: 03-06 June 2024
Date Added to IEEE Xplore: 03 September 2024