Priority awareness: towards a computational model of human fairness for multi-agent systems

  • Authors:
  • Steven De Jong; Karl Tuyls; Katja Verbeeck; Nico Roos

  • Affiliations:
  • MICC/IKAT, Maastricht University, The Netherlands (all authors)

  • Venue:
  • ALAMAS'05/ALAMAS'06/ALAMAS'07: Proceedings of the 5th, 6th and 7th European Conference on Adaptive and Learning Agents and Multi-Agent Systems: Adaptation and Multi-Agent Learning
  • Year:
  • 2005

Abstract

Many multi-agent systems are intended to operate together with, or as a service to, humans. Typically, multi-agent systems are designed assuming perfectly rational, self-interested agents, following the principles of classical game theory. However, research in the field of behavioral economics shows that humans are not purely self-interested; they strongly care about whether their rewards are fair. Therefore, multi-agent systems that fail to take fairness into account may not be sufficiently aligned with human expectations and may not reach their intended goals. Two important motivations for fairness have already been identified and modelled: (i) inequity aversion and (ii) reciprocity. We identify a third motivation that has not yet been captured: priority awareness. We show how priorities may be modelled and discuss their relevance for multi-agent research.
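The abstract itself gives no formulas, but the inequity-aversion motivation in (i) is commonly formalized with the Fehr-Schmidt utility function, in which an agent's utility is its own payoff minus penalties for disadvantageous and advantageous inequity. The sketch below illustrates that standard model for context; the parameter values `alpha` and `beta` are illustrative assumptions, not values from this paper.

```python
def fehr_schmidt_utility(payoffs, i, alpha=0.8, beta=0.4):
    """Fehr-Schmidt inequity-averse utility for agent i.

    payoffs : list of material payoffs, one per agent
    alpha   : weight on envy (others earning more than agent i)
    beta    : weight on guilt (agent i earning more than others)
    Parameter values here are illustrative only.
    """
    n = len(payoffs)
    xi = payoffs[i]
    others = [x for j, x in enumerate(payoffs) if j != i]
    # Average disadvantageous inequity (envy term).
    envy = sum(max(x - xi, 0) for x in others) / (n - 1)
    # Average advantageous inequity (guilt term).
    guilt = sum(max(xi - x, 0) for x in others) / (n - 1)
    return xi - alpha * envy - beta * guilt

# An equal split incurs no inequity penalty, while an unequal split
# can leave the disadvantaged agent with negative utility.
print(fehr_schmidt_utility([5, 5], 0))  # 5.0
print(fehr_schmidt_utility([2, 8], 0))  # 2 - 0.8 * 6 = -2.8
```

Under this model, a purely self-interested agent (alpha = beta = 0) recovers classical game-theoretic utility, which is the baseline the paper argues is misaligned with human behavior.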