The use of Artificial Intelligence (AI) in credit scoring has transformed the financial industry by enabling institutions to assess risk more efficiently and accurately. However, the opacity of many AI models has raised concerns about their fairness, accountability, and trustworthiness. This paper investigates the application of explainable AI (XAI) models to credit scoring in financial institutions. We examine why traditional AI models offer little insight into their decision-making processes and argue for the integration of XAI techniques to improve the interpretability of AI-driven credit scoring systems. Through a comprehensive literature review, we identify a range of XAI methods, including decision trees, rule-based models, and feature importance analysis, and discuss their potential to improve the explainability, fairness, and trustworthiness of credit scoring models. We also highlight the importance of regulatory compliance and ethical considerations when incorporating XAI into credit scoring practices. The paper concludes by emphasizing the need for ongoing research and development in XAI to ensure the sustainable and responsible use of AI in the financial sector.
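To make the methods named in the abstract concrete, below is a minimal sketch of the three listed techniques (a decision tree, its rule-based view, and feature importance analysis) on a synthetic credit-scoring task. The feature names and data are hypothetical and are not drawn from the paper; scikit-learn is assumed as the modeling library.

```python
# Minimal sketch: the three XAI techniques named in the abstract, applied to
# a synthetic credit-scoring task. Feature names and data are hypothetical.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant features; label 1 marks a loan default.
feature_names = ["income", "debt_ratio", "credit_history_len", "num_open_accounts"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Decision tree: an inherently interpretable scoring model.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# 2) Rule-based view: the fitted tree rendered as nested if-then rules.
print(export_text(model, feature_names=feature_names))

# 3) Feature importance analysis: each feature's share of the tree's
#    total impurity reduction (the shares sum to 1.0).
for name, score in zip(feature_names, model.feature_importances_):
    print(f"{name}: {score:.3f}")
```

For black-box models, model-agnostic explainers such as LIME or SHAP serve the same role, attributing each prediction to individual applicant features.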
Harris, J. Explainable AI Models for Credit Scoring in Financial Institutions. Information Sciences and Technological Innovations, 2021, 3, 25. https://doi.org/10.69610/j.isti.20211221
AMA Style
Harris J. Explainable AI Models for Credit Scoring in Financial Institutions. Information Sciences and Technological Innovations. 2021;3(2):25. https://doi.org/10.69610/j.isti.20211221
Chicago/Turabian Style
Harris, James. 2021. "Explainable AI Models for Credit Scoring in Financial Institutions." Information Sciences and Technological Innovations 3, no. 2: 25. https://doi.org/10.69610/j.isti.20211221
APA Style
Harris, J. (2021). Explainable AI Models for Credit Scoring in Financial Institutions. Information Sciences and Technological Innovations, 3(2), 25. https://doi.org/10.69610/j.isti.20211221