EXPLAINABLE AI FOR UNDERSTANDING HUMAN DECISION MAKING PATTERNS

Authors

  • PRIYA DALAL, BHARTI SHARMA, TRIPTI SHARMA, PUNEET GARG, KAHKSHA AHMED

Keywords

Explainable artificial intelligence; human decision making; cognitive load; trust and reliance; human–computer interaction; interpretability; evaluation metrics.

Abstract

Explainable artificial intelligence (XAI) has become an essential research area for making complex machine-learning models transparent, trustworthy and actionable for human decision makers. As artificial intelligence (AI) systems increasingly influence high-stakes decisions in domains such as finance, healthcare and education, understanding how explanations affect human judgment is critical. This paper presents a comprehensive examination of XAI for understanding human decision-making patterns, synthesising theoretical foundations, recent empirical findings and design considerations. The paper develops conceptual frameworks linking explanation types to cognitive processes, summarises empirical evidence on the effects of explanations on task performance, trust and cognitive load, and discusses challenges such as the white-box paradox, algorithmic aversion and the risk of overreliance. We further propose guidelines for designing human-centred XAI systems that align with users' mental models, support diverse stakeholder needs and incorporate mechanisms for recognising when explanations are insufficient. Finally, we highlight open challenges and future directions for research at the intersection of XAI and human decision making.

How to Cite

PRIYA DALAL, BHARTI SHARMA, TRIPTI SHARMA, PUNEET GARG, & KAHKSHA AHMED. (2025). Explainable AI for understanding human decision making patterns. TPM – Testing, Psychometrics, Methodology in Applied Psychology, 32(S7), 412–427. Retrieved from https://tpmap.org/submission/index.php/tpm/article/view/2132