Simultaneously modeling humans' preferences and their beliefs about others' preferences

  • Authors:
  • Sevan G. Ficici; Avi Pfeffer

  • Affiliations:
  • Harvard University, Cambridge, Massachusetts; Harvard University, Cambridge, Massachusetts

  • Venue:
  • Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 1
  • Year:
  • 2008

Abstract

In strategic multiagent decision making, a reasoner must often hold beliefs about other agents and use those beliefs to inform its own decisions. The behavior the reasoner produces therefore reflects an interaction between its beliefs about other agents and its own preferences. A significant challenge for model designers is to model such a reasoner's behavior so that its preferences and its beliefs can each be identified and distinguished. In this paper, we introduce a model of strategic reasoning that distinguishes between the reasoner's utility function, the reasoner's beliefs about another agent's utility function, and the reasoner's beliefs about how that agent might interact with yet other agents. We show that our model is uniquely identifiable: no two different parameter settings cause the model to produce the same behavior over all possible inputs. We then illustrate the performance of our model on a multiagent negotiation game played by human subjects, and we find that our subjects hold slightly incorrect beliefs about the other agents in the game.
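
The abstract does not spell out the model's functional form, so the following is only a minimal illustrative sketch of the general idea it describes: the reasoner's own utility function and its beliefs about another agent's utility function enter as separate parameters, and the reasoner noisily best-responds to the behavior it predicts from those beliefs. The softmax (quantal-response-style) choice rule, the temperature parameters, and all function and payoff names below are assumptions made for illustration, not the paper's actual model.

```python
import numpy as np

def softmax(values, temperature=1.0):
    """Noisy best response: turn utilities into choice probabilities."""
    v = np.asarray(values, dtype=float) / temperature
    v -= v.max()  # numerical stability
    p = np.exp(v)
    return p / p.sum()

def predicted_response(own_action, other_actions, believed_utility, temp):
    """The reasoner's belief about how the other agent replies to own_action.

    believed_utility(own_action, other_action) encodes the reasoner's beliefs
    about the other agent's preferences; it may differ from that agent's
    true utility function.
    """
    return softmax([believed_utility(own_action, b) for b in other_actions], temp)

def reasoner_choice(own_actions, other_actions, own_utility, believed_utility,
                    temp_self=1.0, temp_other=1.0):
    """Distribution over the reasoner's actions: expected own utility is taken
    under the predicted response of the other agent, then noisily maximized."""
    expected = []
    for a in own_actions:
        resp = predicted_response(a, other_actions, believed_utility, temp_other)
        expected.append(sum(p * own_utility(a, b) for p, b in zip(resp, other_actions)))
    return softmax(expected, temp_self)

# Hypothetical 2x2 example with illustrative payoffs (not from the paper).
own_actions = other_actions = ["cooperate", "defect"]
own_payoff = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
              ("defect", "cooperate"): 4, ("defect", "defect"): 1}
# What the reasoner *believes* the other agent earns in each outcome.
believed_payoff = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 4,
                   ("defect", "cooperate"): 0, ("defect", "defect"): 1}

dist = reasoner_choice(own_actions, other_actions,
                       own_utility=lambda a, b: own_payoff[(a, b)],
                       believed_utility=lambda a, b: believed_payoff[(a, b)])
print(dict(zip(own_actions, np.round(dist, 3))))
```

In a sketch of this kind, the reasoner's own payoffs and its believed payoffs for the other agent are separate parameters, which is the property the abstract emphasizes: identifiability requires that no two distinct settings of these parameters produce identical behavior across all inputs.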