The Dynamics of Multi-Agent Reinforcement Learning

  • Authors:
  • Luke Dickens, Krysia Broda, Alessandra Russo

  • Affiliations:
  • Imperial College London, UK, email: {luke.dickens03,k.broda,a.russo}@imperial.ac.uk

  • Venue:
  • Proceedings of ECAI 2010: 19th European Conference on Artificial Intelligence
  • Year:
  • 2010

Abstract

Infinite-horizon multi-agent control processes with non-determinism and partial state knowledge have particularly interesting properties with respect to adaptive control, such as the possible non-existence of Nash Equilibria (NE), or non-strict NE that are nonetheless points of convergence. The identification of reinforcement learning (RL) algorithms that are robust, accurate and efficient when applied to these general multi-agent domains is an open and challenging problem. This paper uses learning pressure fields as a means of evaluating RL algorithms in the context of multi-agent processes. Specifically, we show how to model partially observable infinite-horizon stochastic processes (single-agent) and games (multi-agent) within the Finite Analytic Stochastic Process framework. Taking long-term average expected returns as utility measures, we show the existence of learning pressure fields: vector fields, similar to the dynamics of evolutionary game theory, that indicate the medium- and long-term learning behaviours of agents independently seeking to maximise this utility. We show empirically that these learning pressure fields are followed closely by policy-gradient RL algorithms.
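As a rough illustration of the idea, the sketch below tabulates such a pressure field for a one-shot 2x2 matrix game, where each agent's policy is a single action probability and the "pressure" at a joint policy point is the gradient of each agent's own expected return with respect to its own parameter. This is a simplification for illustration only: the payoff matrices, grid resolution and function names are hypothetical, and the paper itself works with partially observable infinite-horizon processes in the Finite Analytic Stochastic Process framework rather than one-shot games.

    import numpy as np

    # Illustrative payoff bimatrix for a 2x2 coordination game (hypothetical
    # values chosen only for this sketch).
    A = np.array([[3.0, 0.0],
                  [0.0, 1.0]])   # row player's payoffs
    B = np.array([[3.0, 0.0],
                  [0.0, 1.0]])   # column player's payoffs

    def expected_returns(p, q):
        """Expected payoffs when the row player takes action 0 with
        probability p and the column player with probability q."""
        x = np.array([p, 1.0 - p])
        y = np.array([q, 1.0 - q])
        return x @ A @ y, x @ B @ y

    def pressure(p, q):
        """Gradient of each agent's own expected return with respect to its
        own policy parameter: the direction independent gradient learners
        would move at the joint policy (p, q)."""
        x = np.array([p, 1.0 - p])
        y = np.array([q, 1.0 - q])
        dU1_dp = (A[0] - A[1]) @ y        # d/dp of [p, 1-p] A [q, 1-q]^T
        dU2_dq = x @ (B[:, 0] - B[:, 1])  # d/dq of [p, 1-p] B [q, 1-q]^T
        return dU1_dp, dU2_dq

    # Tabulate the vector field over a grid of joint policies.
    grid = np.linspace(0.05, 0.95, 10)
    field = np.array([[pressure(p, q) for q in grid] for p in grid])
    print(field.shape)  # (10, 10, 2): one pressure vector per joint policy

Plotting field as a quiver diagram over the (p, q) square gives the kind of vector field the abstract describes, analogous to a replicator-dynamics plot; in the paper's setting the same construction is applied to long-term average returns of partially observable processes rather than one-shot payoffs.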