On a dynamical analysis of reinforcement learning in games: emergence of Occam's Razor

  • Authors:
  • Karl Tuyls, Katja Verbeeck, Sam Maes

  • Affiliations:
  • CoMo, Department of Computer Science, VUB, Belgium (all authors)

  • Venue:
  • CEEMAS'03 Proceedings of the 3rd Central and Eastern European conference on Multi-agent systems
  • Year:
  • 2003

Abstract

Modeling learning agents in the context of Multi-agent Systems requires an adequate understanding of their dynamic behaviour. Usually, these agents are modeled in the same way as the players in a standard game-theoretic model. Unfortunately, traditional Game Theory is static and therefore limited in its usefulness. Evolutionary Game Theory improves on this by providing dynamics that describe how strategies evolve over time. In this paper, we discuss three learning models whose dynamics are related to the Replicator Dynamics (RD). We show how a classical Reinforcement Learning (RL) technique, Q-learning, relates to the RD. This relation allows a better understanding of the learning process and makes it possible to determine how complex an RL model should be. More precisely, Occam's Razor applies in the framework of games: the simplest model (Cross learning) suffices for learning equilibria. An experimental verification in all three models is presented.
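The Replicator Dynamics referred to in the abstract can be sketched numerically. The following is an illustrative sketch, not code from the paper: it Euler-integrates the standard replicator equation dx_i/dt = x_i[(Ax)_i − x·Ax] for a symmetric game, where the payoff matrix `A` and initial mixed strategy `x` are hypothetical examples chosen for illustration.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics for a symmetric game.

    x: mixed strategy / population shares over pure strategies (sums to 1).
    A: payoff matrix; A[i, j] is the payoff to strategy i against strategy j.
    """
    fitness = A @ x                      # expected payoff of each pure strategy
    avg = x @ fitness                    # population-average payoff
    return x + dt * x * (fitness - avg)  # dx_i/dt = x_i[(Ax)_i - x.Ax]

# Hypothetical example: a 2x2 coordination game. Starting from a mixed
# strategy, the dynamics drift toward a pure-strategy equilibrium.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x = np.array([0.6, 0.4])
for _ in range(2000):
    x = replicator_step(x, A)
print(x)  # close to [1, 0]: the population settles on the first strategy
```

Note that the step leaves the shares summing to one, since the per-strategy changes x_i[(Ax)_i − x·Ax] cancel when summed; this mirrors how the RL update rules discussed in the paper keep action probabilities on the simplex.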