Graphical models in continuous domains for multiagent reinforcement learning

  • Authors:
  • Scott Proper, Kagan Tumer

  • Affiliations:
  • Oregon State University, Corvallis, OR, USA

  • Venue:
  • Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems (AAMAS 2013)

  • Year:
  • 2013

Abstract

In this paper we test two coordination methods -- difference rewards and coordination graphs -- in a continuous multiagent rover domain using reinforcement learning, and discuss when and why each method performs better alone or in combination. We also contribute a novel method for applying coordination graphs in a continuous domain by taking advantage of the wire-fitting approach used to handle continuous state and action spaces.
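
The abstract leans on two building blocks that may be unfamiliar: the difference reward D_i = G(z) - G(z_{-i}), which credits each agent with its marginal contribution to the global reward G, and wire-fitting, which interpolates Q-values over a continuous action space from a small set of candidate action/value pairs ("wires"). The sketch below illustrates both ideas in isolation; the function names, the smoothing constants c and eps, and the toy numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wire_fit_q(action, wire_actions, wire_values, c=0.1, eps=1e-6):
    """Wire-fitting interpolation of Q(s, a) in a continuous action space.

    Each "wire" i is a candidate action/value pair
    (wire_actions[i], wire_values[i]) produced by a function
    approximator for the current state. Q(s, a) is a weighted average
    of the wire values, weighting most heavily the wires whose actions
    are close to `action` and whose values are high. The smoothing
    constants `c` and `eps` are illustrative choices.
    """
    q_max = np.max(wire_values)
    # Squared distance of the queried action to each wire's action,
    # plus a penalty for wires whose value falls below the best wire's.
    dist = (np.sum((wire_actions - action) ** 2, axis=1)
            + c * (q_max - wire_values) + eps)
    weights = 1.0 / dist
    return np.sum(weights * wire_values) / np.sum(weights)

def difference_reward(g_with_i, g_without_i):
    """Difference reward D_i = G(z) - G(z_{-i}): agent i is rewarded by
    the change in the global reward G attributable to its presence."""
    return g_with_i - g_without_i

# Toy usage: four wires in a 2-D action space for some state.
wire_actions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
wire_values = np.array([0.2, 0.8, 0.5, 0.1])
print(wire_fit_q(np.array([0.9, 0.1]), wire_actions, wire_values))
print(difference_reward(g_with_i=3.0, g_without_i=2.4))
```

Because the wires give a small, discrete set of action/value anchors per state, coordination-graph machinery built for discrete joint actions can operate over them even though the underlying action space is continuous, which is the connection the paper exploits.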