Approximation guarantees for fictitious play

  • Authors: Vincent Conitzer
  • Affiliations: Department of Computer Science, Duke University, Durham, NC
  • Venue: Allerton'09, Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing
  • Year: 2009

Abstract

Fictitious play is a simple, well-known, and often-used algorithm for playing (and, especially, learning to play) games. However, in general it does not converge to equilibrium; even when it does, we may not be able to run it to convergence. Still, we may obtain an approximate equilibrium. In this paper, we study the approximation properties that fictitious play obtains when it is run for a limited number of rounds. We show that if both players randomize uniformly over their actions in the first r rounds of fictitious play, then the result is an ε-equilibrium, where ε = (r + 1)/(2r). (Since we are examining only a constant number of pure strategies, a result of Feder et al. implies that no guarantee better than ε = 1/2 is possible, so this bound is close to optimal.) This guarantee assumes that both players use the same r. We show how to obtain the optimal approximation guarantee when both the opponent's r and the game are adversarially chosen (but there is an upper bound R on the opponent's r), using a linear program formulation. We show that if the action played in the ith round of fictitious play is chosen with probability proportional to 1 for i = 1 and 1/(i − 1) for all 2 ≤ i ≤ R + 1, this gives an approximation guarantee of 1 − 1/(2 + ln R). We also obtain a lower bound of 1 − 4/ln R. This provides an actionable prescription for how long to run fictitious play.
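The quantities in the abstract are simple closed forms, and can be sketched directly. The following is a minimal illustration, not code from the paper: `uniform_eps` evaluates the guarantee ε = (r + 1)/(2r) for uniform mixing over the first r rounds, and `robust_weights` builds the decreasing-weight distribution the abstract prescribes (weight 1 on round 1, weight 1/(i − 1) on round i for 2 ≤ i ≤ R + 1), with its guarantee 1 − 1/(2 + ln R). The function names are invented here for illustration.

```python
import math

def uniform_eps(r: int) -> float:
    """Guarantee when both players mix uniformly over their first r
    fictitious-play actions: eps = (r + 1) / (2r), approaching 1/2."""
    return (r + 1) / (2 * r)

def robust_weights(R: int) -> list[float]:
    """Distribution from the abstract's prescription: probability
    proportional to 1 for round 1 and 1/(i - 1) for rounds 2..R+1."""
    raw = [1.0] + [1.0 / (i - 1) for i in range(2, R + 2)]
    total = sum(raw)
    return [w / total for w in raw]

def robust_guarantee(R: int) -> float:
    """Guarantee 1 - 1/(2 + ln R) when the opponent's r <= R and the
    game are chosen adversarially."""
    return 1 - 1 / (2 + math.log(R))

print(uniform_eps(10))           # 0.55
print(robust_guarantee(100))     # about 0.849
print(sum(robust_weights(100)))  # about 1.0 (a valid distribution)
```

Note that larger r tightens the uniform-mixing bound toward 1/2, while the robust guarantee degrades only logarithmically in the adversary's horizon bound R.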