A convergent multiagent reinforcement learning approach for a subclass of cooperative stochastic games

  • Authors:
  • Thomas Kemmerich; Hans Kleine Büning

  • Affiliations:
  • International Graduate School Dynamic Intelligent Systems, University of Paderborn, Paderborn, Germany; Department of Computer Science, University of Paderborn, Paderborn, Germany

  • Venue:
  • ALA'11: Proceedings of the 11th International Conference on Adaptive and Learning Agents
  • Year:
  • 2011

Abstract

We present a distributed Q-learning approach for independently learning agents in a subclass of cooperative stochastic games called cooperative sequential stage games, in which several stage games are played one after the other. We also propose a transformation function for this class and prove that the transformed and original games have the same set of optimal joint strategies. Under the condition that the played game is obtained through this transformation, we prove that our approach converges to an optimal joint strategy for the last stage game of the transformed game, and thus also for the original game. In addition, we show that the approach converges to ε-optimal joint strategies for each of the stage games. The environment in our approach does not need to present a state signal to the agents. Instead, through the transformation function, the agents infer state changes from an engineered reward. This allows the agents to avoid storing a separate strategy for every state; they maintain only a single strategy that is adapted to the currently played stage game. Consequently, the algorithm has very low space requirements, and its complexity is comparable to that of single-agent Q-learning. Besides the theoretical analysis, we also underline the convergence properties with experiments.
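
Since the abstract only sketches the mechanism, the following is a minimal Python sketch of the general idea: a stateless independent Q-learner that keeps one Q-value per own action (no state index) and re-initializes its single strategy when an engineered reward signals that a new stage game has begun. All names, the sentinel-reward convention, and the parameter values are hypothetical assumptions for illustration; this is not the authors' algorithm.

```python
import random


class IndependentQLearner:
    """Stateless independent Q-learner keeping one Q-value per own action.

    Hypothetical sketch only: the sentinel-reward convention below stands
    in for the paper's engineered reward and is our assumption, not the
    authors' construction.
    """

    STAGE_CHANGE = -1.0  # assumed sentinel: rewards at or below this value
                         # signal that a new stage game has started

    def __init__(self, n_actions, alpha=0.1, epsilon=0.1):
        self.n_actions = n_actions
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration probability
        self.q = [0.0] * n_actions    # single strategy, no state index

    def act(self):
        # epsilon-greedy selection over the single, stateless Q-table
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[a])

    def update(self, action, reward):
        if reward <= self.STAGE_CHANGE:
            # engineered reward indicates a stage change: adapt the single
            # strategy to the new stage game by re-initializing it
            self.q = [0.0] * self.n_actions
            return
        # stateless Q-learning update (a repeated matrix game has no
        # successor state to bootstrap from)
        self.q[action] += self.alpha * (reward - self.q[action])


if __name__ == "__main__":
    # Two learners on one invented cooperative stage game; both agents
    # receive the same joint reward.  Action 1 dominates for each agent,
    # so the optimal joint action (1, 1) is reliably found in this game.
    payoff = [[0.0, 1.0],
              [1.0, 2.0]]
    a1, a2 = IndependentQLearner(2), IndependentQLearner(2)
    for _ in range(5000):
        x, y = a1.act(), a2.act()
        a1.update(x, payoff[x][y])
        a2.update(y, payoff[x][y])
    print(a1.q, a2.q)  # Q-values for action 1 should dominate for both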