Networks of learning automata and limiting games

  • Authors:
  • Peter Vrancx; Katja Verbeeck; Ann Nowé

  • Affiliations:
  • Computational Modeling Lab, Vrije Universiteit Brussel; MICC-IKAT, Maastricht University; Computational Modeling Lab, Vrije Universiteit Brussel

  • Venue:
  • ALAMAS'05/ALAMAS'06/ALAMAS'07: Proceedings of the 5th, 6th and 7th European Conference on Adaptive and Learning Agents and Multi-Agent Systems: Adaptation and Multi-Agent Learning
  • Year:
  • 2005

Abstract

Learning Automata (LA) were recently shown to be valuable tools for designing Multi-Agent Reinforcement Learning algorithms. One of the principal contributions of LA theory is that a set of decentralized, independent learning automata can control a finite Markov chain with unknown transition probabilities and rewards. This result was recently extended to Markov games and analyzed with the use of limiting games. In this paper we continue this analysis, but we now assume that our agents are fully ignorant of the other agents in the environment, i.e. they can only observe themselves: they do not know how many other agents are present, which actions these other agents took, which rewards they received for those actions, or which locations they occupy in the state space. We prove that in Markov games where agents have this limited type of observability, a network of independent LA is still able to converge to an equilibrium point of the underlying limiting game, provided a common ergodicity assumption holds and the agents do not interfere with each other's transition probabilities.
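
For readers unfamiliar with learning automata, the sketch below illustrates the kind of individual learner the abstract refers to. It is a minimal, assumed example of a linear reward-inaction (L_R-I) automaton, the standard update scheme in the LA literature; the abstract does not specify the exact scheme or any implementation details, and the 2x2 common-payoff game in the usage section is purely hypothetical. Each agent observes only its own action and reward, mirroring the limited-observability setting described above.

```python
"""Minimal sketch of a linear reward-inaction (L_R-I) learning automaton.

Illustrative assumption: the abstract does not state which update scheme the
authors use; L_R-I is the standard scheme in this literature.
"""
import random


class LinearRewardInactionAutomaton:
    """A single learning automaton maintaining a probability vector over actions."""

    def __init__(self, n_actions: int, learning_rate: float = 0.05) -> None:
        self.n_actions = n_actions
        self.learning_rate = learning_rate
        # Start from a uniform action-selection distribution.
        self.probs = [1.0 / n_actions] * n_actions
        self.last_action = 0

    def choose_action(self) -> int:
        """Sample an action according to the current probability vector."""
        self.last_action = random.choices(range(self.n_actions), weights=self.probs)[0]
        return self.last_action

    def update(self, reward: float) -> None:
        """L_R-I update: reinforce the chosen action in proportion to the reward
        (assumed to lie in [0, 1]); a zero reward leaves the vector unchanged."""
        a, lam = self.last_action, self.learning_rate
        for i in range(self.n_actions):
            if i == a:
                self.probs[i] += lam * reward * (1.0 - self.probs[i])
            else:
                self.probs[i] -= lam * reward * self.probs[i]


if __name__ == "__main__":
    # Two independent automata repeatedly play a hypothetical 2x2 coordination
    # game; each one sees only its own action and the reward it receives.
    payoff = [[1.0, 0.0], [0.0, 1.0]]
    agents = [LinearRewardInactionAutomaton(2), LinearRewardInactionAutomaton(2)]
    for _ in range(5000):
        a0, a1 = agents[0].choose_action(), agents[1].choose_action()
        reward = payoff[a0][a1]
        for agent in agents:
            agent.update(reward)
    # Both probability vectors typically converge toward one of the two
    # coordination equilibria.
    print([round(p, 3) for agent in agents for p in agent.probs])
```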