Experimental Design & Pilot Testing for ECLSS Anomaly Resolution Using Daphne-AT Virtual Assistant
Human Factors and Ergonomics Society Annual Meeting 2022
P. Dutta, P. K. Josan, R. K. W. Wong, B. Dunbar, D. Selva and A. Diaz-Artiles
[proceedings]

Abstract

Artificial Intelligence (AI) agents have the potential to play a critical role in human spaceflight by serving as the first point of contact for astronauts during emergencies in a long-duration spaceflight mission, when communication delays with the Mission Control Crew (MCC) become longer and more frequent. Sharing some of the tasks performed by the MCC with an on-board AI agent that can assist with treating in-flight anomalies gives the crew more autonomy, allowing them to respond faster and more safely to emergencies, or to focus on other critical aspects of their mission. For this technology to succeed in this role, providing accurate diagnoses for the anomalies encountered is clearly important. However, providing explanations for those recommendations may also be a critical factor in establishing trust and increasing the likelihood of the crew actually following the agent’s recommendations. This paper evaluates the effects of explanations on crew performance, trust in automation, mental workload, and situational awareness in the context of anomaly resolution tasks. We conduct a study in which subjects work with the AI agent on various simulated anomaly scenarios. In one session, the agent provides a diagnosis and recommendation with no explanations; in the other session, it also provides explanations for its diagnosis. We measure the effects of the explanations on trust in automation, situational awareness, performance, and workload. We discuss our findings and provide design recommendations for AI agents in the context of anomaly treatment in Long Duration Exploration Missions (LDEM).