Designs for explaining intelligent agents

  • Authors:
  • Steven R. Haynes; Mark A. Cohen; Frank E. Ritter

  • Affiliations:
  • College of Information Sciences & Technology, Penn State University, 301J IST Building, University Park, PA 16802, USA; Department of Business Administration, Computer Science, & Information Technology, Lock Haven University, Lock Haven, PA 17745, USA; College of Information Sciences & Technology, Penn State University, 301J IST Building, University Park, PA 16802, USA

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2009

Abstract

Explanation is an important capability for usable intelligent systems, including intelligent agents and cognitive models embedded within simulations and other decision support systems. Explanation facilities help users understand how and why an intelligent system possesses a given structure and set of behaviors. Prior research has produced a number of approaches to providing explanation capabilities and has identified some significant challenges. We describe designs that can be reused to create intelligent agents capable of explaining themselves. The designs include ways to provide ontological, mechanistic, and operational explanations. These designs inscribe lessons learned from prior research and provide guidance for incorporating explanation facilities into intelligent systems. The designs are derived both from prior research on explanation tool design and from the empirical study reported here of the questions users ask when working with an intelligent system. We demonstrate the use of these designs through examples implemented in the Herbal high-level cognitive modeling language. These designs can help build better agents: they support creating more usable and more affordable intelligent agents by encapsulating prior knowledge about how to generate explanations in concise representations that agent developers can instantiate or adapt.
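
To make the three explanation types concrete, the following sketch shows one way an agent might support them. This is a minimal Python sketch under our own assumptions, not the paper's Herbal implementation; every class, method, and field name here is hypothetical. The intuition: ontological explanations answer "what is X?", mechanistic explanations answer "how does X work?", and operational explanations answer "why did you just do that?".

    from dataclasses import dataclass, field

    # Hypothetical sketch: three kinds of explanation knowledge an agent can carry.
    @dataclass
    class ExplainableAgent:
        ontology: dict = field(default_factory=dict)    # what each concept is
        mechanisms: dict = field(default_factory=dict)  # how each behavior works
        trace: list = field(default_factory=list)       # what the agent did, and why

        def act(self, action: str, reason: str) -> None:
            # Record each action with its justification so operational
            # explanations can be generated after the fact.
            self.trace.append((action, reason))

        def explain_what(self, concept: str) -> str:
            # Ontological explanation: define a concept the agent uses.
            return self.ontology.get(concept, f"No definition recorded for {concept!r}.")

        def explain_how(self, behavior: str) -> str:
            # Mechanistic explanation: describe how a behavior is produced.
            return self.mechanisms.get(behavior, f"No mechanism recorded for {behavior!r}.")

        def explain_why(self) -> str:
            # Operational explanation: replay the recorded decision trace.
            if not self.trace:
                return "No actions have been taken yet."
            return "; ".join(f"did {a} because {r}" for a, r in self.trace)

For instance, a simple patrol agent could be populated and queried like this (again, an illustrative example rather than anything from the paper):

    agent = ExplainableAgent(
        ontology={"threat": "Any contact classified as hostile."},
        mechanisms={"patrol": "Cycle through waypoints until a threat is detected."},
    )
    agent.act("turn toward waypoint 2", "waypoint 1 was reached")
    print(agent.explain_what("threat"))   # ontological
    print(agent.explain_how("patrol"))    # mechanistic
    print(agent.explain_why())            # operational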