Abstract:
The research focuses on developing a decision-making methodology that combines human intelligence (HI) and AI, emphasizing the interpretability of AI results. The methodology formalizes decision trials using feature vectors and probability distributions for AI recommendations and HI proposals. The reliability of both AI and HI decisions is crucial for effective decision-making, and trust and interpretability in AI-generated clinical decisions are essential for successful implementation. An experiment involving image classification tasks was conducted, examining human attitudes, trust, and decision-making behaviour concerning AI recommendations. Three scenarios were evaluated: HI decision, AI decision, and joint HI-AI decision. Expected Calibration Errors (ECEs) were below 10%, with the AI exhibiting an ECE_AI of 9.7% and humans an ECE_H of 6.2%. The ECEs were used as uncertainty scores to optimize the joint decision-making rule. The trust of humans in AI was evaluated, leading to improved HI accuracy. The final decision relied on the interpretability of AI results, yielding a 6% improvement over the initial HI accuracy.
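The abstract quantifies calibration with Expected Calibration Error. As a point of reference (not the paper's own code), a minimal sketch of the standard binned ECE, where predictions are grouped into confidence bins and the gap between mean confidence and accuracy is averaged with bin weights:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: weighted mean absolute gap between
    per-bin mean confidence and per-bin accuracy.

    confidences: predicted probabilities in [0, 1]
    correct:     1 if the prediction was right, else 0
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins (lo, hi]; samples at exactly 0 fall nowhere,
        # which is harmless for probabilities produced by a classifier
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece

# Perfectly calibrated toy case: confidence 0.75, accuracy 3/4 -> ECE = 0
print(expected_calibration_error([0.75] * 4, [1, 1, 1, 0]))  # → 0.0
```

A lower ECE means the stated confidence tracks actual accuracy, which is why the abstract can plug the ECEs directly into the joint HI-AI decision rule as uncertainty scores; the bin count and binning scheme here are assumptions, not taken from the paper.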
Date of Conference: 07-09 September 2023
Date Added to IEEE Xplore: 21 December 2023