Abstract:
In recent years, there has been a significant increase in the use of artificial intelligence (AI) models for predicting disease from patient symptoms. However, these models are often considered black boxes, as they lack transparency in how they make their predictions. This lack of transparency raises concerns about the reliability and trustworthiness of these models. To address this issue, explainable AI (XAI) techniques have been developed to provide insights into how these models work. One such technique is LIME (Local Interpretable Model-agnostic Explanations), which generates explanations for individual predictions by approximating the behavior of the model locally. In this paper, we propose a novel approach that combines LIME with AI models for predicting disease from patient symptoms. We also apply Recursive Feature Elimination with Cross-Validation (RFECV) to diagnose disease from fewer features. We show that this approach provides highly accurate predictions together with interpretable explanations for those predictions. Prediction accuracies of 91.57%, 99.59%, 99.59%, 99.59%, 99.59%, and 99.59% were achieved for the logistic regression, decision tree, random forest, AdaBoost, gradient boosting, and light gradient boosting machine models, respectively. Our results suggest that the proposed approach has the potential to improve the trustworthiness and reliability of AI models for predicting disease from patient symptoms.
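The following is a minimal sketch of the pipeline the abstract describes: RFECV to reduce the feature set, a classifier trained on the kept features, and LIME to explain individual predictions. The synthetic data, feature names, class names, and hyperparameters are placeholders, not the paper's dataset or settings.

```python
# Hypothetical sketch of the RFECV + LIME pipeline described in the abstract.
# The synthetic data, feature names, and hyperparameters are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Stand-in for a tabular symptom/diagnosis dataset (columns = symptoms).
X, y = make_classification(n_samples=1000, n_features=30,
                           n_informative=10, random_state=0)
feature_names = [f"symptom_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: RFECV recursively drops features, keeping only those that
# improve cross-validated accuracy.
selector = RFECV(RandomForestClassifier(random_state=0),
                 step=1, cv=5, scoring="accuracy")
selector.fit(X_train, y_train)
kept = [n for n, keep in zip(feature_names, selector.support_) if keep]

# Step 2: retrain the classifier on the reduced feature set.
clf = RandomForestClassifier(random_state=0)
clf.fit(selector.transform(X_train), y_train)
print("accuracy:", clf.score(selector.transform(X_test), y_test))

# Step 3: LIME explains one prediction by fitting a simple, local
# surrogate model around the instance being explained.
explainer = LimeTabularExplainer(
    selector.transform(X_train), feature_names=kept,
    class_names=["healthy", "disease"], mode="classification",
)
exp = explainer.explain_instance(
    selector.transform(X_test)[0], clf.predict_proba, num_features=5)
print(exp.as_list())  # (feature condition, weight) pairs for this prediction
```

The same pattern applies to the other classifiers the abstract lists (logistic regression, AdaBoost, gradient boosting, LightGBM): swap the estimator, refit the selector and classifier, and pass the new model's `predict_proba` to LIME.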
Published in: 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT)
Date of Conference: 06-08 July 2023
Date Added to IEEE Xplore: 23 November 2023