Minimal Mental Models

  • Authors:
  • David V. Pynadath; Stacy C. Marsella

  • Affiliations:
  • Information Sciences Institute, University of Southern California, Marina del Rey, CA (both authors)

  • Venue:
  • AAAI'07: Proceedings of the 22nd National Conference on Artificial Intelligence - Volume 2
  • Year:
  • 2007

Abstract

Agents must form and update mental models about each other in a wide range of domains: team coordination, plan recognition, social simulation, user modeling, games of incomplete information, etc. Existing research typically treats the problem of forming beliefs about other agents as an isolated subproblem, where the modeling agent starts from an initial set of possible models for another agent and then maintains a belief about which of those models applies. This initial set of models is typically a full specification of possible agent types. Although such a rich space gives the modeling agent high accuracy in its beliefs, it will also incur high cost in maintaining those beliefs. In this paper, we demonstrate that by taking this modeling problem out of its isolation and placing it back within the overall decision-making context, the modeling agent can drastically reduce this rich model space without sacrificing any performance. Our approach comprises three methods. The first method clusters models that lead to the same behaviors in the modeling agent's decision-making context. The second method clusters models that may produce different behaviors, but produce equally preferred outcomes with respect to the utility of the modeling agent. The third technique sacrifices a fixed amount of accuracy by clustering models that lead to performance losses that are below a certain threshold. We illustrate our framework using a social simulation domain and demonstrate its value by showing the minimal mental model spaces that it generates.
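The abstract does not give the authors' actual algorithm, but the three clustering criteria it describes can be sketched concretely. The following Python is an illustrative sketch only, under assumed interfaces: hypothetical callables `predict_behavior` (the behavior a candidate model predicts for the other agent in the current context), `best_response` (the modeling agent's policy against a predicted behavior), and `utility` (the modeling agent's expected utility for a policy against a model), plus a tolerance `epsilon` for the third, lossy method. None of these names come from the paper.

```python
from collections import defaultdict

def cluster_models(models, predict_behavior, best_response, utility, epsilon=0.0):
    """Group candidate mental models of another agent into equivalence classes.

    models           -- iterable of candidate model objects for the other agent
    predict_behavior -- model -> behavior predicted in the current decision context
    best_response    -- behavior -> the modeling agent's policy against it
    utility          -- (policy, model) -> the modeling agent's expected utility
    epsilon          -- tolerated utility loss when merging clusters (method 3)
    """
    # Method 1: behavioral equivalence. Models that predict the same behavior
    # for the other agent are indistinguishable in this decision context.
    by_behavior = defaultdict(list)
    for m in models:
        by_behavior[predict_behavior(m)].append(m)

    # Method 2: outcome equivalence. Predicted behaviors may differ, but if the
    # modeling agent's best response yields the same expected utility, the
    # distinction is irrelevant to its decision making. (Utilities are rounded
    # here only to get a usable dictionary key for float comparison.)
    by_outcome = defaultdict(list)
    for behavior, cluster in by_behavior.items():
        policy = best_response(behavior)
        score = round(utility(policy, cluster[0]), 6)
        by_outcome[score].extend(cluster)

    # Method 3: epsilon-tolerant merging. Clusters whose utilities lie within
    # epsilon of a group's lowest member are collapsed, trading a bounded
    # performance loss for a smaller model space.
    merged = []
    for score in sorted(by_outcome):
        if merged and score - merged[-1][0] <= epsilon:
            merged[-1][1].extend(by_outcome[score])
        else:
            merged.append([score, list(by_outcome[score])])
    return [cluster for _, cluster in merged]
```

With `epsilon=0.0` the sketch performs only the lossless reductions (methods 1 and 2); raising `epsilon` enacts the third method's accuracy-for-compactness trade-off described in the abstract.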