A Study into Preferred Explanations of Virtual Agent Behavior

  • Authors:
  • Maaike Harbers; Karel Bosch; John-Jules Ch. Meyer

  • Affiliations:
  • Utrecht University, Utrecht, The Netherlands 3508 and TNO Human Factors, Soesterberg, The Netherlands 3769; TNO Human Factors, Soesterberg, The Netherlands 3769; Utrecht University, Utrecht, The Netherlands 3508

  • Venue:
  • IVA '09 Proceedings of the 9th International Conference on Intelligent Virtual Agents
  • Year:
  • 2009


Abstract

Virtual training systems provide an effective means to train people for complex, dynamic tasks such as crisis management or firefighting. Intelligent agents are often used to play the characters with whom a trainee interacts. To increase the trainee's understanding of played scenarios, several accounts of agents that can explain the reasons for their actions have been proposed. This paper describes an empirical study of what instructors consider useful agent explanations for trainees. It was found that different explanation types were preferred for different actions, e.g., the conditions enabling an action's execution, the goals underlying an action, or the goals that become achievable after an action is executed. When an action has important consequences for other agents, instructors suggest that the others' perspectives should be part of the explanation.