Unifying convergence and no-regret in multiagent learning

  • Authors:
  • Bikramjit Banerjee; Jing Peng

  • Affiliations:
  • Department of Electrical Engineering & Computer Science, Tulane University, New Orleans, LA (both authors)

  • Venue:
  • LAMAS'05: Proceedings of the First International Conference on Learning and Adaptation in Multi-Agent Systems
  • Year:
  • 2005

Abstract

We present a new multiagent learning algorithm, RVσ(t), that builds on an earlier algorithm, ReDVaLeR. ReDVaLeR could guarantee (a) convergence to best response against stationary opponents, and either (b) constant bounded regret against arbitrary opponents or (c) convergence to Nash equilibrium policies in self-play. However, it makes two strong assumptions: (1) that it can distinguish between self-play and otherwise non-stationary agents, and (2) that all agents know their portions of the same equilibrium in self-play. We show that the adaptive learning rate of RVσ(t), which depends explicitly on time, can remove both of these assumptions. Consequently, RVσ(t) theoretically achieves (a') convergence to near-best response against eventually stationary opponents, (b') no-regret payoff against arbitrary opponents, and (c') convergence to some Nash equilibrium policy in some classes of games in self-play. Each agent now needs to know only its portion of any equilibrium, and does not need to distinguish among non-stationary opponent types. To our knowledge, this is also the first successful demonstration of convergence of a no-regret algorithm in the Shapley game.
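
To illustrate the role of a time-dependent learning rate in a replicator-style policy update, the following sketch shows one way such a schedule might be plugged in. This is only an illustrative, hypothetical example: the function names (sigma, update_policy), the power-law decay, and the payoff estimates are assumptions for exposition, not the exact RVσ(t) update rule, which is specified in the paper.

    import numpy as np

    def sigma(t, sigma0=0.1, decay=0.5):
        """Hypothetical time-dependent learning rate schedule sigma(t).
        A power-law decay is used here only as a placeholder; the actual
        schedule of RV_sigma(t) is defined in the paper."""
        return sigma0 / (1.0 + t) ** decay

    def update_policy(policy, payoff_vector, t):
        """One replicator-style step on a mixed strategy over actions.

        policy        : current mixed strategy (probability vector)
        payoff_vector : estimated payoff of each pure action against the
                        opponents' recent empirical play (assumed given)
        t             : current time step, entering only through sigma(t)
        """
        expected = policy @ payoff_vector
        # Raise the probability of actions that outperform the current
        # expected payoff and lower the rest, scaled by sigma(t).
        direction = policy * (payoff_vector - expected)
        new_policy = policy + sigma(t) * direction
        # Project back onto the probability simplex (clip and renormalize).
        new_policy = np.clip(new_policy, 1e-12, None)
        return new_policy / new_policy.sum()

    # Tiny usage example with three actions and fixed payoff estimates.
    policy = np.ones(3) / 3.0
    payoffs = np.array([0.2, 0.5, 0.3])
    for t in range(100):
        policy = update_policy(policy, payoffs, t)

Because sigma(t) shrinks over time, updates become increasingly conservative, which is the intuition behind how a time-dependent rate can cover both the self-play and the arbitrary-opponent cases without the agent having to classify its opponents.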