Building explainable artificial intelligence systems

  • Authors:
  • Mark G. Core;H. Chad Lane;Michael van Lent;Dave Gomboc;Steve Solomon;Milton Rosenberg

  • Affiliations:
  • The Institute for Creative Technologies, The University of Southern California, Marina del Rey, CA (all authors)

  • Venue:
  • IAAI'06: Proceedings of the 18th Conference on Innovative Applications of Artificial Intelligence - Volume 2
  • Year:
  • 2006

Abstract

As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has become increasingly difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but their designers have not heeded the lessons learned from earlier work on explaining expert system behavior: these new explanation systems are neither modular nor portable, because each is tied to a particular AI system. In this paper, we present a modular and generic architecture for explaining the behavior of simulated entities. We describe its application to the Virtual Humans, a simulation designed to teach soft skills such as negotiation and cultural awareness.
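The abstract's central claim is that explanation should be decoupled from any particular AI system. The sketch below illustrates one way such a decoupling might look in code; the names (ActionRecord, ExplainableAgent, ExplanationModule) and the event-log representation are illustrative assumptions for this summary, not the architecture described in the paper.

    # Illustrative sketch only: names and structure are assumptions,
    # not the XAI architecture described by Core et al. (2006).
    from dataclasses import dataclass, field
    from typing import List, Protocol


    @dataclass
    class ActionRecord:
        """A single logged decision made by a simulated entity."""
        actor: str
        action: str
        reasons: List[str] = field(default_factory=list)


    class ExplainableAgent(Protocol):
        """Any AI system can plug in by exposing its decision log."""
        def decision_log(self) -> List[ActionRecord]: ...


    class ExplanationModule:
        """Generic explainer: depends only on the ExplainableAgent
        interface, not on any particular simulation or behavior model."""
        def __init__(self, agent: ExplainableAgent):
            self.agent = agent

        def explain(self, actor: str) -> str:
            records = [r for r in self.agent.decision_log() if r.actor == actor]
            if not records:
                return f"No logged behavior for {actor}."
            return "\n".join(
                f"{r.actor} chose '{r.action}' because: "
                f"{'; '.join(r.reasons) or 'unspecified'}"
                for r in records
            )

Under this assumed design, portability comes from the ExplanationModule depending only on a narrow logging interface, so swapping the underlying behavior model does not require rewriting the explainer.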