Many multi-agent systems are intended to operate together with, or as a service to, humans. Typically, multi-agent systems are designed under the assumption of perfectly rational, self-interested agents, following the principles of classical game theory. However, research in behavioral economics shows that humans are not purely self-interested: they care strongly about whether their rewards are fair. Multi-agent systems that fail to take fairness into account may therefore be poorly aligned with human expectations and may fail to reach their intended goals. Two important motivations for fairness have already been identified and modelled: (i) inequity aversion and (ii) reciprocity. We identify a third motivation that has not yet been captured: priority awareness. We show how priorities may be modelled and discuss their relevance for multi-agent research.
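As a concrete illustration of how a fairness motivation such as inequity aversion can be modelled, the sketch below implements the standard Fehr-Schmidt utility function from behavioral economics. This is an assumption for illustration only, not necessarily the exact formulation used in the work described above: an agent's utility is its own reward, discounted by a term for disadvantageous inequity (weight `alpha`, others earning more) and a term for advantageous inequity (weight `beta`, the agent itself earning more).

```python
def inequity_averse_utility(rewards, i, alpha=0.5, beta=0.25):
    """Fehr-Schmidt utility of agent i given all agents' rewards.

    alpha penalizes disadvantageous inequity (others earn more than i);
    beta penalizes advantageous inequity (i earns more than others).
    The weights here are illustrative, not taken from the paper.
    """
    n = len(rewards)
    x_i = rewards[i]
    # Average shortfall relative to better-off agents ("envy" term).
    envy = sum(max(x_j - x_i, 0) for x_j in rewards) / (n - 1)
    # Average surplus relative to worse-off agents ("guilt" term).
    guilt = sum(max(x_i - x_j, 0) for x_j in rewards) / (n - 1)
    return x_i - alpha * envy - beta * guilt

# Equal rewards leave utility untouched; unequal rewards are discounted.
print(inequity_averse_utility([10, 10], 0))  # -> 10.0
print(inequity_averse_utility([10, 4], 0))   # advantaged agent: 10 - 0.25*6 = 8.5
print(inequity_averse_utility([10, 4], 1))   # disadvantaged agent: 4 - 0.5*6 = 1.0
```

A purely self-interested agent corresponds to `alpha = beta = 0`; increasing either weight makes the agent trade off its own reward against equality, which is how such models depart from classical game-theoretic assumptions.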