Modeling how humans reason about others with partial information

  • Authors:
  • Sevan G. Ficici; Avi Pfeffer

  • Affiliations:
  • Harvard University, Cambridge, Massachusetts; Harvard University, Cambridge, Massachusetts

  • Venue:
  • Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008) - Volume 1
  • Year:
  • 2008

Abstract

Computer agents participate in many collaborative and competitive multiagent domains in which humans make decisions. For computer agents to interact successfully with people in such environments, an understanding of human reasoning is beneficial. In this paper, we investigate how people reason strategically about others under uncertainty and the implications of this question for the design of computer agents. Using a situated partial-information negotiation game, we conduct human-subjects trials to obtain data on human play. We then construct a hierarchy of models that explores questions about human reasoning: Do people explicitly reason about other players in the game? If so, do people also consider the possible states of other players for which only partial information is known? Is it worthwhile to capture such reasoning in computer models and then use those models in computer agents? We compare our models by how well they fit the collected data. We then construct computer agents that use our models in one of two ways: emulating human behavior and playing a best response to the model. We deploy these agents in further human-subjects trials for evaluation. Our results indicate that people do reason about other players in our game and also reason under uncertainty, and that better models yield more successful computer agents.
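The abstract's distinction between the two agent designs can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, since the abstract does not describe the paper's actual game, payoffs, or models: the two-action proposer/responder game, the payoff table, and the model's probabilities are all invented for illustration. The point is only the structural difference: an emulation agent samples its action from a learned model of human play, while a best-response agent maximizes expected payoff against that model's prediction of the human opponent.

```python
import random

def human_model(role):
    """Hypothetical learned model: distribution over a human's actions
    in the given role. A real model would be fit to trial data."""
    if role == "proposer":
        return {"offer_low": 0.3, "offer_high": 0.7}
    return {"accept": 0.6, "reject": 0.4}

# Proposer's payoff for each (proposer action, responder action) pair.
PAYOFF = {
    ("offer_low", "accept"): 8, ("offer_low", "reject"): 0,
    ("offer_high", "accept"): 4, ("offer_high", "reject"): 1,
}

def emulation_agent(role):
    """Act as the modeled human would: sample from the model's prediction."""
    dist = human_model(role)
    actions, probs = zip(*dist.items())
    return random.choices(actions, weights=probs)[0]

def best_response_agent():
    """As proposer, choose the action maximizing expected payoff against
    a human responder as predicted by the model."""
    responder_dist = human_model("responder")
    def expected_payoff(action):
        return sum(p * PAYOFF[(action, r)] for r, p in responder_dist.items())
    return max(["offer_low", "offer_high"], key=expected_payoff)

print(emulation_agent("proposer"))  # samples like the modeled human
print(best_response_agent())        # "offer_low": 0.6*8 = 4.8 > 0.6*4 + 0.4*1 = 2.8
```

Under this toy model, the two designs can disagree: the emulation agent usually plays "offer_high" because the modeled humans do, while the best-response agent plays "offer_low" because it has the higher expected payoff against the predicted responder, which mirrors why the paper evaluates both designs in further trials.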