Are Explanations Helpful Under Uncertainty? Effects of Uncertainty in AI-Assisted Spacecraft Anomaly Diagnosis
Journal of Cognitive Engineering and Decision Making, 2026
P. Dutta, P. K. Josan, R. K. W. Wong, B. Dunbar, D. Selva and A. Diaz-Artiles

Abstract

This study contributes to the domain of explainable AI (XAI) by examining how uncertainty in an AI agent’s diagnosis affects human decision-making in high-stakes, time-critical environments. We conducted a randomized controlled experiment in which participants diagnosed simulated spacecraft anomalies with the help of an AI agent named Daphne. Each participant completed two sessions with different levels of explainability: one in which Daphne provided Basic explanations (qualitative likelihood scores) and another in which it offered Advanced explanations (quantitative likelihood scores with detailed justifications). Each session included a balanced set of scenarios featuring high and low diagnostic uncertainty. We evaluated the participants’ task performance, trust in the AI agent, reliance on its recommendations, satisfaction, and self-reported confidence. Across all measures, participant outcomes declined significantly under high-uncertainty conditions. However, Advanced explanations were particularly effective in improving performance when uncertainty was high, suggesting that explanation depth plays a key role in mitigating the negative effects of uncertainty. These results highlight the importance of transparency in AI systems designed to support decision-making. Our findings offer practical insights into the design of XAI tools that can better support human operators, especially in safety-critical domains like spaceflight, where managing uncertainty is a key challenge in effective human-AI collaboration.