Abstract:
To understand the complex nature of an Artificial Intelligence (AI) model, the model needs to be more trustworthy, transparent, scalable, understandable, and explainable. Trust in an AI model rests on the decisions it makes inside its black-box environment. Explainable AI (XAI) therefore helps developers understand how an AI model behaves when it makes a particular decision. As AI models grow more complex, scientists struggle to interpret their outcomes, so XAI is needed to explain an AI model's decision-making process. Moreover, to build trustworthy AI models, organizations embed ethical principles into their AI processes. In this paper, we study the case of the banking sector, where an inefficient onboarding process fails to establish a relationship with the customer. Such a process costs banks their users' trust, which widens the gap in the customer relationship and further hampers onboarding. To bridge this gap, we explain the decision-making process of the AI model through XAI.
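As a rough illustration of the kind of post-hoc explanation the abstract describes, consider a black-box model that scores bank onboarding applicants. The paper does not specify an implementation; the sketch below uses permutation feature importance from scikit-learn as a stand-in XAI technique, and the feature names and synthetic data are hypothetical.

    # Minimal sketch: explain a black-box onboarding classifier by measuring
    # how much shuffling each input feature degrades its test accuracy.
    # All names and data here are illustrative, not from the paper.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    features = ["income", "credit_score", "account_age", "kyc_doc_quality"]
    X = rng.normal(size=(1000, len(features)))
    # Synthetic label: approval driven mainly by credit_score and kyc_doc_quality.
    y = (0.8 * X[:, 1] + 0.6 * X[:, 3] + rng.normal(scale=0.5, size=1000)) > 0

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # The importance scores expose which features the model relies on,
    # turning an opaque decision into a ranked, human-readable explanation.
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
        print(f"{name:>16}: {imp:.3f}")

In a real onboarding setting, such a ranking lets a bank justify individual approval or rejection decisions to customers and regulators, which is the trust gap the paper targets.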
Published in: 2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE)
Date of Conference: 16 February 2023
Date Added to IEEE Xplore: 23 May 2023