Coordinated exploration in multi-agent reinforcement learning: an application to load-balancing

  • Authors:
  • Katja Verbeeck; Ann Nowé; Karl Tuyls

  • Affiliations:
  • Vrije Universiteit Brussel, Brussel, Belgium; Vrije Universiteit Brussel, Brussel, Belgium; University of Limburg, Diepenbeek, Belgium

  • Venue:
  • Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems
  • Year:
  • 2005


Abstract

This paper is concerned with how multi-agent reinforcement learning algorithms can practically be applied to real-life problems. Recently, a new coordinated multi-agent exploration mechanism, called Exploring Selfish Reinforcement Learning (ESRL), was proposed. With this mechanism, a group of independent agents can find optimal fair solutions to multi-agent problems without modeling other agents, without knowing the type of multi-agent problem they are confronted with, and using only a limited form of communication. In particular, the mechanism allows the agents to use natural reinforcement signals coming from the application itself. We report on how ESRL agents can solve the load-balancing problem in a natural way, both as a common interest and as a conflicting interest problem.
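The coordinated exploration idea behind ESRL can be illustrated with a minimal sketch: independent learning-automaton agents converge in an exploration phase, then exclude their converged action and restart, so the group visits different joint solutions and remembers the best one. The sketch below is a simplified toy, not the paper's implementation; the two-agent "server choice" game, the reward values, and the phase lengths are all hypothetical assumptions made for illustration.

```python
import random

# Hypothetical load-balancing game (not from the paper): each of two
# agents picks a server (0 or 1). Load is balanced when they differ.
def reward(a0, a1):
    return 1.0 if a0 != a1 else 0.2  # collision gives low reward

class LearningAutomaton:
    """Independent agent with a linear reward-inaction update rule."""
    def __init__(self, n_actions, lr=0.05):
        self.actions = list(range(n_actions))
        self.lr = lr
        self.reset()

    def reset(self):
        # Uniform probabilities over the currently available actions.
        self.p = {a: 1.0 / len(self.actions) for a in self.actions}

    def choose(self):
        r, acc = random.random(), 0.0
        for a, pa in self.p.items():
            acc += pa
            if r <= acc:
                return a
        return self.actions[-1]

    def update(self, action, r):
        # Move probability mass toward the chosen action, scaled by reward.
        for a in self.p:
            if a == action:
                self.p[a] += self.lr * r * (1.0 - self.p[a])
            else:
                self.p[a] -= self.lr * r * self.p[a]

    def converged_action(self):
        return max(self.p, key=self.p.get)

    def exclude(self, action):
        # ESRL-style exclusion: drop the converged action so the next
        # exploration phase is driven toward a different joint solution.
        if len(self.actions) > 1:
            self.actions.remove(action)
        self.reset()

random.seed(0)
agents = [LearningAutomaton(2), LearningAutomaton(2)]
best = (-1.0, None)

for phase in range(2):                      # alternating exploration phases
    avg, steps = 0.0, 2000
    for t in range(steps):
        a0, a1 = agents[0].choose(), agents[1].choose()
        r = reward(a0, a1)                  # natural signal from the task
        agents[0].update(a0, r)
        agents[1].update(a1, r)
        avg += r / steps
    joint = (agents[0].converged_action(), agents[1].converged_action())
    if avg > best[0]:                       # remember the best joint solution
        best = (avg, joint)
    for ag, a in zip(agents, joint):        # synchronization: exclude, restart
        ag.exclude(a)

print(best)
```

Note that the agents never observe each other's actions or probabilities; coordination emerges only from the shared reward signal and the synchronized exclusion step, which mirrors the limited-communication setting the abstract describes.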