Human-inspired computational fairness

  • Authors:
  • Steven de Jong; Karl Tuyls

  • Affiliations:
  • Computational Modelling Lab, Vrije Universiteit Brussel, Brussels 1050, Belgium, and Department of Knowledge Engineering, Maastricht University, Maastricht 6200 MD, The Netherlands; Department of Knowledge Engineering, Maastricht University, Maastricht 6200 MD, The Netherlands

  • Venue:
  • Autonomous Agents and Multi-Agent Systems
  • Year:
  • 2011

Abstract

In many common tasks for multi-agent systems, assuming individually rational agents leads to inferior solutions. Numerous researchers have found that fairness needs to be considered in addition to individual reward, and have proposed valuable computational models of fairness. In this paper, we argue that there are two opportunities for improvement. First, existing models are not specifically tailored to a class of tasks known as social dilemmas, even though such tasks are common in multi-agent systems. Second, these models generally rely on the assumption that all agents can and will adhere to them, which is not always the case. We therefore present a novel computational model: human-inspired computational fairness. When confronted with social dilemmas, humans apply a number of fully decentralized sanctioning mechanisms to ensure that optimal, fair solutions emerge, even when some participants decide purely on the basis of individual reward. We show how these human mechanisms can be modelled computationally, such that fair, optimal solutions emerge when agents are confronted with social dilemmas.
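
The abstract does not detail the sanctioning mechanisms themselves. As a rough illustration of the general idea of decentralized sanctioning in a social dilemma, the sketch below plays a one-shot public goods game with peer punishment, a classic human sanctioning mechanism. It is a hypothetical sketch, not the authors' model; all parameter names and values (ENDOWMENT, MULTIPLIER, PUNISH_COST, PUNISH_FINE) are illustrative assumptions chosen so that sanctioning makes free-riding unprofitable.

```python
# Hypothetical sketch, not the authors' model: a one-shot public goods
# game with decentralized peer punishment. All parameter values below
# are assumptions for illustration only.

ENDOWMENT = 10.0   # per-agent budget each round (assumed)
MULTIPLIER = 1.6   # public-pool multiplier, 1 < MULTIPLIER < n (assumed)
PUNISH_COST = 1.0  # cost a punisher pays per sanction (assumed)
PUNISH_FINE = 6.0  # fine each sanction imposes on a defector (assumed)


def play_round(strategies):
    """Play one round; strategies are 'cooperate', 'defect', or 'punish'.

    Punishers contribute like cooperators and additionally sanction
    every defector. Returns the list of per-agent payoffs.
    """
    n = len(strategies)
    contributions = [0.0 if s == "defect" else ENDOWMENT for s in strategies]
    share = MULTIPLIER * sum(contributions) / n  # equal share of the pool
    payoffs = [ENDOWMENT - c + share for c in contributions]

    # Fully decentralized sanctioning: no central authority; each
    # punisher pays a small cost to fine each free-rider.
    punishers = [i for i, s in enumerate(strategies) if s == "punish"]
    defectors = [i for i, s in enumerate(strategies) if s == "defect"]
    for p in punishers:
        for d in defectors:
            payoffs[p] -= PUNISH_COST
            payoffs[d] -= PUNISH_FINE
    return payoffs


if __name__ == "__main__":
    # Without punishers the defector would earn 22 versus 12 for the
    # cooperators; with two punishers present, defection yields only 10.
    group = ["punish", "punish", "cooperate", "defect"]
    for strategy, payoff in zip(group, play_round(group)):
        print(f"{strategy:10s} -> {payoff:5.1f}")
```

The output also exposes the well-known second-order dilemma: punishers earn slightly less than non-punishing cooperators, which is one reason sanctioning cannot simply be assumed of individually rational agents.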