Adaptive Learning in Systems of Interacting Agents

  • Authors:
  • H. Peyton Young

  • Affiliations:
  • University of Oxford

  • Venue:
  • WINE '09: Proceedings of the 5th International Workshop on Internet and Network Economics
  • Year:
  • 2009


Abstract

A learning rule is adaptive if it is simple to compute, requires little information about the actions of others, and is plausible as a model of behavior [1, 2]. In this paper I survey a family of adaptive learning rules in which experimentation plays a key role. These rules have the property that, in large classes of games, agents' individual behavior results in Nash equilibrium behavior by the group a high proportion of the time. Agents need not know that Nash equilibrium is being played; indeed, they need not know anything about the structure of the game in which they are embedded. Instead, equilibrium evolves as an unintended consequence of individual adaptation. The theory is particularly relevant to modeling systems of interacting agents that are very large and complex, so that one cannot reasonably expect that players would try to optimize based on their beliefs about the state of the system. Concrete examples include drivers adjusting to urban traffic patterns, or people sending and receiving information in large networks. While such rules can be viewed as a descriptive model of how humans adapt in such situations, they can also be taken as design elements in engineered systems, such as distributed sensors or robots, where the "agents" are programmed to behave in a way that leads to desirable system-wide outcomes.
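
To make the flavor of these rules concrete, the sketch below implements a simplified trial-and-error style learner in a two-player pure coordination game: each agent tracks a benchmark action, experiments with small probability, and adopts an experimental action only when its realized payoff beats the benchmark. This is an illustrative variant, not the exact rules surveyed in the paper; the game, the class name TrialAndErrorAgent, and the experimentation rate epsilon are all assumptions introduced here.

```python
import random

# Hypothetical symmetric 2x2 coordination game: both agents earn 1
# when they choose the same action, 0 otherwise.
ACTIONS = [0, 1]

def payoff(my_action, other_action):
    return 1.0 if my_action == other_action else 0.0

class TrialAndErrorAgent:
    """Simplified trial-and-error learner: keep a benchmark action,
    experiment occasionally, and adopt the experimental action only
    if its realized payoff beats the benchmark payoff."""

    def __init__(self, epsilon=0.05):
        self.epsilon = epsilon                 # experimentation rate
        self.action = random.choice(ACTIONS)   # current benchmark action
        self.benchmark = 0.0                   # payoff of the benchmark action

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)      # experiment
        return self.action                     # otherwise play the benchmark

    def update(self, played, realized):
        # A successful experiment becomes the new benchmark action.
        if played != self.action and realized > self.benchmark:
            self.action = played
        # Playing the benchmark action refreshes its benchmark payoff.
        if played == self.action:
            self.benchmark = realized

a, b = TrialAndErrorAgent(), TrialAndErrorAgent()
for t in range(10_000):
    xa, xb = a.choose(), b.choose()
    a.update(xa, payoff(xa, xb))
    b.update(xb, payoff(xb, xa))
print(a.action, b.action)  # with high probability, a coordinated (Nash) pair
```

Note that neither agent observes the other's action or knows the payoff matrix; coordination emerges purely from individual adaptation. The experimentation rate governs the trade-off the abstract alludes to: agents need experiments to discover better actions, yet each experiment briefly perturbs coordinated play, so with a small rate the pair spends a high proportion of the time at a Nash equilibrium of the coordination game.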