
Explainability in Machine Learning & AI Models for Complex Data Structures on Scorecards Development in Retail Banking


Abstract:

As machine learning (ML) and artificial intelligence (AI) models continue to revolutionize retail banking, their application in tasks such as predicting customer behaviour, assessing credit risk, and detecting fraudulent activities has become increasingly complex. While these advanced models offer significant predictive power, they often operate as “black boxes,” making their decision-making processes opaque and challenging to interpret. This lack of transparency poses critical issues in retail banking, where trust and clarity are paramount. Through a comprehensive review of the literature on machine learning explainability techniques, this work found that traditional scorecards, which rely on straightforward models such as logistic regression, have been surpassed by sophisticated AI systems capable of handling high-dimensional, heterogeneous, and temporally interdependent data. However, the complexity of these systems undermines stakeholder confidence and understanding. Current explainability approaches, including model-agnostic techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), as well as model-specific techniques such as decision trees and Layer-wise Relevance Propagation (LRP), provide valuable insights but fall short in managing the intricate data structures and interactions characteristic of retail banking. These methods are often limited by their computational demands, their instability, and their difficulty scaling to more complex models. To address these gaps, this study proposes the development of hybrid models that blend interpretable components with complex AI systems, interactive visualization tools that facilitate dynamic exploration and understanding of model predictions, and the application of natural language processing for automated, accessible explanation generation. By enhancing the explainability of AI models, these approaches aim to improve transparency, regulatory compliance...
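
For reference, the Shapley value that SHAP approximates is a standard construction from cooperative game theory (summarized here for context; the formula below is not reproduced from the paper itself). It attributes to feature $i$ its marginal contribution to the model output, averaged over all coalitions $S$ of the remaining features:

\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl[v(S \cup \{i\}) - v(S)\bigr]

where $N$ is the full feature set and $v(S)$ denotes the model's expected output when only the features in $S$ are known. The attributions are additive: summed over all features, they recover the difference between an individual prediction and the average prediction, which is what makes the method attractive for per-applicant scorecard explanations.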
Date of Conference: 27-29 November 2024
Date Added to IEEE Xplore: 28 January 2025
Conference Location: Vanderbijlpark, South Africa

I. Introduction

As machine learning (ML) and artificial intelligence (AI) continue to drive innovation across various industries, their impact on retail banking has been particularly transformative [1], [2]. These technologies empower financial institutions to leverage vast amounts of data to make more accurate and timely decisions, such as predicting customer behaviour, assessing credit risk, and detecting fraudulent activities [2]. However, the sophisticated models used in these applications frequently act as “black boxes,” making their decision-making processes obscure and difficult to articulate [3], [4]. This opacity presents significant challenges in the retail banking industry, where trust and transparency are crucial.
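
To make the black-box problem concrete, the sketch below applies a model-agnostic post-hoc explanation (SHAP, one of the techniques surveyed in this paper) to a credit-risk classifier. It is a minimal illustration, not the authors' method: it assumes the open-source shap and scikit-learn packages, a synthetic dataset, and hypothetical feature names.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for scorecard features; names are hypothetical.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["utilisation", "months_on_book", "num_late_payments",
                 "income", "debt_to_income"]

# A typical "black-box" model replacing a logistic-regression scorecard.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# Additivity: the base value plus a row's SHAP values reconstructs the
# model's raw (log-odds) output for that applicant, giving a per-feature
# breakdown of the score.
print("base value (average log-odds):", explainer.expected_value)
print("attribution for applicant 0:",
      dict(zip(feature_names, np.round(shap_values[0], 3))))

Each applicant thus receives an additive, feature-level account of a single prediction; the limitations noted in the abstract (computational cost on large ensembles, instability of sampled approximations) become relevant when such explanations are scaled to production portfolios.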


References

References are not available for this document.