Multi-agent learning in extensive games with complete information

  • Authors:
  • Pu Huang; Katia Sycara

  • Affiliations:
  • Carnegie Mellon University, Pittsburgh, PA; Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • AAMAS '03 Proceedings of the second international joint conference on Autonomous agents and multiagent systems
  • Year:
  • 2003

Abstract

Learning in a multi-agent system is difficult because the learning environment, jointly created by all learning agents, is time-variant. This paper studies the model of multi-agent learning in complete-information extensive games (CEGs). We provide two provably convergent algorithms for this model. Both algorithms exploit the special structure of CEGs and guarantee both individual and collective convergence. Our work contributes to the multi-agent learning literature in several aspects: 1. We identify a model of multi-agent learning, namely, learning in CEGs, and provide two provably convergent algorithms for this model. 2. We explicitly address the environment-shifting problem and show how patient agents can collectively learn to play equilibrium strategies. 3. Much game-theoretic work on learning uses a technique called fictitious play, which requires agents to build beliefs about their opponents. For our model of learning in CEGs, we show that agents can collectively converge to the sub-game perfect equilibrium (SPE) by repeatedly reinforcing their previous success/failure experience; no belief building is necessary.
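The abstract refers to the sub-game perfect equilibrium (SPE) as the target of convergence but does not spell out the solution concept here. The sketch below is not the paper's learning algorithm; it is a minimal illustration of how the SPE of a finite complete-information extensive game is computed by backward induction, which is the benchmark the learning agents are claimed to reach. The tree representation (Leaf, Node), the helper backward_induction, and the example payoffs are all illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only (not the paper's algorithm): backward induction
# computes the SPE of a finite complete-information extensive game.
from dataclasses import dataclass
from typing import Dict, Tuple, Union

@dataclass
class Leaf:
    payoffs: Tuple[float, ...]                 # one payoff per player

@dataclass
class Node:
    player: int                                # index of the player who moves here
    children: Dict[str, Union["Node", Leaf]]   # action label -> subtree

def backward_induction(tree):
    """Return (SPE payoff vector, strategy mapping node id -> chosen action)."""
    strategy = {}

    def solve(node):
        if isinstance(node, Leaf):
            return node.payoffs
        # Solve every subgame first, then let the moving player pick the action
        # that maximizes her own component of the continuation payoff.
        continuations = {a: solve(child) for a, child in node.children.items()}
        best = max(continuations, key=lambda a: continuations[a][node.player])
        strategy[id(node)] = best
        return continuations[best]

    return solve(tree), strategy

if __name__ == "__main__":
    # Hypothetical two-stage game: player 0 moves first, player 1 replies.
    game = Node(player=0, children={
        "L": Node(player=1, children={"l": Leaf((3, 1)), "r": Leaf((0, 0))}),
        "R": Node(player=1, children={"l": Leaf((1, 2)), "r": Leaf((2, 3))}),
    })
    payoffs, profile = backward_induction(game)
    print("SPE payoffs:", payoffs)   # (3, 1): player 0 plays L, player 1 replies l
```

In this reading, the paper's contribution is that repeated play with simple success/failure reinforcement can reach the same SPE outcome without any agent constructing beliefs about its opponents, whereas the sketch above reaches it centrally and with full knowledge of the game tree.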