Solving delayed coordination problems in MAS

  • Authors:
  • Yann-Michaël De Hauwere; Peter Vrancx; Ann Nowé

  • Affiliation:
  • Vrije Universiteit Brussel, Pleinlaan, Brussel, Belgium

  • Venue:
  • The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 3
  • Year:
  • 2011

Abstract

Recent research has demonstrated that considering local interactions among agents in specific parts of the state space is a successful way of simplifying the multi-agent learning process. By taking other agents into account only when a conflict is possible, an agent can significantly reduce the state-action space in which it learns. Current approaches, however, consider only immediate rewards when detecting conflicts. This restriction is not suitable for realistic systems, where rewards can be delayed and conflicts between agents often become apparent only several time-steps after an action has been taken. In this paper, we contribute a reinforcement learning algorithm that learns where a strategic interaction among agents is needed, several time-steps before the conflict is reflected by the (immediate) reward signal.