Effects of Explanations and Accuracy on Human Performance and Trust in AI-Assisted Anomaly Diagnosis Tasks
Journal of Cognitive Engineering and Decision Making, 19(4), 453-473, 2025
P. Dutta, P. K. Josan, R. K. W. Wong, B. Dunbar, D. Selva and A. Diaz-Artiles

Abstract

The increasing popularity of Artificial Intelligence (AI) emphasizes the need for a deeper understanding of the factors that can influence its use. Providing accurate explanations may be a critical factor in establishing trust and increasing the likelihood that an operator follows an AI agent’s recommendations. However, the impact of explanations on human performance, trust, mental workload (MW), situation awareness (SA), and confidence is not fully understood. In this paper, we investigate the effects of providing explanations for the agent’s recommendations (at varying levels of accuracy) on the human operator’s performance, trust, MW, SA, and confidence, compared to an agent that provides no explanations. Thirty individuals were divided into three groups, each randomly assigned to an agent accuracy level (high, medium, or low). Each participant completed two sessions with the AI agent: one with explanations and one without. In each session, performance was measured objectively (number of anomalies correctly diagnosed and time to diagnosis), while trust, MW, SA, and confidence in the response were measured using surveys. Participants’ performance, trust, SA, and confidence were higher in the sessions in which the agent provided explanations for its recommendations. Accuracy had a significant effect on performance, MW, and user confidence, but not on trust or SA.