Reinforcement Learning Rules in a Repeated Game

  • Authors:
  • Ann Maria Bell

  • Affiliations:
  • Orbital Sciences, NASA Ames Research Center, Mail Stop 239, Moffett Field, CA 94035-1000, USA. E-mail: abell@mail.arc.nasa.gov

  • Venue:
  • Computational Economics
  • Year:
  • 2001


Abstract

This paper examines the performance of simple reinforcement learning algorithms in a stationary environment and in a repeated game where the environment evolves endogenously based on the actions of other agents. Some types of reinforcement learning rules can be extremely sensitive to small changes in the initial conditions; consequently, events early in a simulation can affect the performance of the rule over a relatively long time horizon. However, when multiple adaptive agents interact, algorithms that performed poorly in a stationary environment often converge rapidly to stable aggregate behaviors despite the slow and erratic behavior of individual learners. Conversely, algorithms that are robust in stationary environments can exhibit slow convergence in an evolving environment.
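To make the setting concrete, the following is a minimal sketch of one common family of simple reinforcement learning rules (cumulative-payoff, Roth-Erev-style propensity matching) applied by two agents in a repeated 2x2 game. This is an illustrative assumption about the class of rules studied, not a reproduction of the paper's exact model; the function name, payoff structure, and parameters are hypothetical.

```python
import random

def reinforcement_play(payoffs, rounds=2000, seed=0):
    """Two cumulative-payoff reinforcement learners in a repeated 2x2 game.

    Each agent keeps a propensity for each of its two actions and chooses
    an action with probability proportional to its propensity; the chosen
    action's propensity then grows by the payoff received (Roth-Erev style).
    payoffs[i][a][b] is agent i's payoff when agent 0 plays a and agent 1
    plays b. Returns the final propensities of both agents.
    """
    rng = random.Random(seed)
    props = [[1.0, 1.0], [1.0, 1.0]]  # small initial propensities

    def choose(p):
        # Probability matching: pick action 0 with prob p[0] / (p[0] + p[1]).
        return 0 if rng.random() < p[0] / (p[0] + p[1]) else 1

    for _ in range(rounds):
        a = choose(props[0])
        b = choose(props[1])
        # Reinforce each agent's chosen action by its realized payoff.
        props[0][a] += payoffs[0][a][b]
        props[1][b] += payoffs[1][a][b]
    return props

# A pure-coordination game: both agents earn 1 when their actions match.
coord = [
    [[1, 0], [0, 1]],  # agent 0's payoffs
    [[1, 0], [0, 1]],  # agent 1's payoffs
]
final = reinforcement_play(coord)
```

Because the propensity increments compound, early chance events can tilt both agents toward the same action, illustrating the sensitivity to initial conditions and the endogenous evolution of each learner's environment that the abstract describes.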