Do you get it? user-evaluated explainable BDI agents

  • Authors:
  • Joost Broekens; Maaike Harbers; Koen Hindriks; Karel van den Bosch; Catholijn Jonker; John-Jules Meyer

  • Affiliations:
  • Delft University of Technology; Utrecht University; Delft University of Technology; TNO Institute of Defence, Security and Safety, The Netherlands; Delft University of Technology; Utrecht University

  • Venue:
  • MATES'10: Proceedings of the 8th German Conference on Multiagent System Technologies
  • Year:
  • 2010

Abstract

In this paper we focus on explaining the behavior of autonomous agents to humans, i.e., explainable agents. Explainable agents are useful for many purposes, including scenario-based training (e.g., disaster training), tutoring and pedagogical systems, agent development and debugging, gaming, and interactive storytelling. As the aim is to generate explanations that humans find plausible and insightful, user evaluation of different explanations is essential. In this paper we test the hypothesis that different explanation types are needed to explain different types of actions. We present three different, generically applicable algorithms that automatically generate different types of explanations for actions of BDI-based agents. Quantitative analysis of a user experiment (n=30), in which users rated the usefulness and naturalness of each explanation type for different agent actions, supports our hypothesis. In addition, we present feedback from the users on how they would explain the actions themselves. Finally, we propose guidelines for the development of explainable BDI agents.
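To make the idea of explanation types for BDI agent actions concrete, the sketch below shows a toy goal-tree model in which an action can be explained by the goal it serves, by a higher-level goal further up the tree, or by the belief that enabled it. This is an illustrative assumption about how such explanations can be generated, not the paper's actual algorithms; all class, function, and example names here are hypothetical.

```python
# Illustrative sketch: explaining a BDI agent's action from a goal tree.
# Assumes each action is attached to a goal and (optionally) an enabling belief.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Goal:
    name: str
    parent: Optional["Goal"] = None  # link to a higher-level goal, if any


@dataclass
class Action:
    name: str
    goal: Goal                             # goal this action contributes to
    enabling_belief: Optional[str] = None  # belief that made the action applicable


def explain_by_goal(action: Action) -> str:
    """Explain an action by the goal it directly contributes to."""
    return f"I {action.name} because I want to {action.goal.name}."


def explain_by_top_goal(action: Action) -> str:
    """Explain an action by the top-level goal at the root of the goal tree."""
    goal = action.goal
    while goal.parent is not None:
        goal = goal.parent
    return f"I {action.name} because ultimately I want to {goal.name}."


def explain_by_belief(action: Action) -> str:
    """Explain an action by the belief (condition) that enabled it."""
    if action.enabling_belief is None:
        return f"I {action.name} because the action was applicable."
    return f"I {action.name} because I believe {action.enabling_belief}."


if __name__ == "__main__":
    # Hypothetical scenario-based training example (e.g., disaster training).
    rescue = Goal("rescue the victims")
    assess = Goal("assess the situation", parent=rescue)
    report = Action("report the fire", goal=assess,
                    enabling_belief="there is a fire in the building")

    print(explain_by_goal(report))      # goal-based explanation
    print(explain_by_top_goal(report))  # higher-level goal explanation
    print(explain_by_belief(report))    # belief-based explanation
```

Under this toy model, the three functions yield different explanations for the same action, which is the kind of contrast the user study asks participants to rate for usefulness and naturalness.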