Abstract:
While artificial intelligence (AI) has shown promising results in healthcare, it is undeniable that AI in healthcare carries risks that society must acknowledge. This paper presents a comprehensive systems modeling framework for evaluating trust, and for responding to reasons for mistrust and distrust, in AI-assisted medical diagnosis, with a specific focus on the diagnosis of cardiac sarcoidosis using Explainable Artificial Intelligence (XAI) techniques. The design comprises two primary parts: (1) identifying the scenarios most and least disruptive to the system, as well as the initiatives most important to it; and (2) applying XAI techniques such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Anchors to explain how machine learning models justify their outcomes. The findings underscore the importance of explainable AI in critical domains such as healthcare, where patients' lives are at stake. XAI can be used to analyze outcomes for AI users, determine feature importance, improve comprehension of AI outputs, enhance the transparency, explainability, and interpretability of those outputs, and facilitate data assessment.
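As an illustration of the second part, the following minimal Python sketch shows how SHAP and LIME can attach feature-level explanations to a tabular classifier's predictions. It is not the paper's actual pipeline: the shap and lime calls are standard library usage, while the model, the synthetic data, and the clinical-sounding feature names (lvef, troponin, etc.) are hypothetical stand-ins for the paper's cardiac-sarcoidosis dataset.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in features; the paper's actual cardiac-sarcoidosis
# feature set is not reproduced here.
feature_names = ["lvef", "troponin", "crp", "qrs_duration", "age"]

X, y = make_classification(n_samples=500, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# SHAP: per-feature attributions from a tree explainer; the mean absolute
# SHAP value approximates each feature's global importance.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
print(dict(zip(feature_names, np.abs(sv).mean(axis=0).round(3))))

# LIME: a local surrogate explanation for a single test case.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["no_CS", "CS"],
                                      mode="classification")
exp = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                      num_features=5)
print(exp.as_list())  # (feature condition, weight) pairs for this prediction
```

Anchors follows a similar pattern but produces if-then rules that "anchor" an individual prediction, complementing SHAP's additive attributions and LIME's local linear surrogates.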
Date of Conference: 08-10 May 2024
Date Added to IEEE Xplore: 12 August 2024