Fairness in multi-agent systems
The Knowledge Engineering Review
Artificial agents learning human fairness
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 2
Fairness in multi-agent systems
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems: doctoral mentoring program
Priority awareness: towards a computational model of human fairness for multi-agent systems
ALAMAS'05/ALAMAS'06/ALAMAS'07 Proceedings of the 5th, 6th, and 7th European conference on Adaptive and learning agents and multi-agent systems: adaptation and multi-agent learning
Bee behaviour in multi-agent systems: a bee foraging algorithm
ALAMAS'05/ALAMAS'06/ALAMAS'07 Proceedings of the 5th, 6th, and 7th European conference on Adaptive and learning agents and multi-agent systems: adaptation and multi-agent learning
In this paper, we introduce a nature-inspired multi-agent system for the task domain of resource distribution in large storage facilities. The system is based on potential fields and swarm intelligence, into which straightforward path planning is integrated. We show both experimentally and theoretically that the system is adaptive, robust, and scalable. Moreover, we show that the planning component helps to overcome common pitfalls of nature-inspired systems in the task-assignment domain. We end this paper with a discussion of an additional requirement for multi-agent systems interacting with humans: functionality. More precisely, we argue that such systems must behave in a fair way to be functional. We illustrate how fairness can be measured and show that our system behaves in a moderately fair manner.
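The abstract states that fairness of the resource distribution can be measured but does not name a specific metric. As a hypothetical sketch (not the paper's own measure), one common choice for quantifying how evenly resources are spread across agents is Jain's fairness index, which equals 1.0 for a perfectly even allocation and approaches 1/n when a single agent receives everything:

```python
def jains_index(allocations):
    """Jain's fairness index over per-agent resource allocations.

    Returns a value in (0, 1]: 1.0 means a perfectly even split,
    1/n means one agent holds all of the resources.
    """
    n = len(allocations)
    total = sum(allocations)
    if total == 0:
        return 1.0  # nothing allocated: trivially even
    return total ** 2 / (n * sum(x * x for x in allocations))

# An even split scores 1.0; a fully skewed split among 4 agents scores 0.25.
print(jains_index([5, 5, 5, 5]))   # -> 1.0
print(jains_index([20, 0, 0, 0]))  # -> 0.25
```

A "moderately fair" system in this sense would score well below 1.0 but well above 1/n; again, whether the authors use this particular index is an assumption made here for illustration.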