“Evolutionary” selection dynamics in games: convergence and limit properties. International Journal of Game Theory.
Multiagent learning using a variable learning rate. Artificial Intelligence.
Playing large games using simple strategies. Proceedings of the 4th ACM Conference on Electronic Commerce.
The complexity of computing a Nash equilibrium. Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing.
RVσ(t): a unifying approach to performance and convergence in online multiagent learning. AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems.
Settling the complexity of two-player Nash equilibrium. FOCS '06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science.
If multi-agent learning is the answer, what is the question? Artificial Intelligence.
Perspectives on multiagent learning. Artificial Intelligence.
Approximating Nash equilibria using small-support strategies. Proceedings of the 8th ACM Conference on Electronic Commerce.
Progress in approximate Nash equilibria. Proceedings of the 8th ACM Conference on Electronic Commerce.
Computing an approximate jam/fold equilibrium for 3-player no-limit Texas Hold'em tournaments. Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 2.
Mixed-integer programming methods for finding Nash equilibria. AAAI '05: Proceedings of the 20th National Conference on Artificial Intelligence, Volume 2.
Optimal Rhode Island Hold'em poker. AAAI '05: Proceedings of the 20th National Conference on Artificial Intelligence, Volume 4.
New algorithms for approximate Nash equilibria in bimatrix games. WINE '07: Proceedings of the 3rd International Conference on Internet and Network Economics.
An optimization approach for approximate Nash equilibria. WINE '07: Proceedings of the 3rd International Conference on Internet and Network Economics.
A note on approximate Nash equilibria. WINE '06: Proceedings of the Second International Conference on Internet and Network Economics.
Efficient algorithms for constant well-supported approximate equilibria in bimatrix games. ICALP '07: Proceedings of the 34th International Conference on Automata, Languages and Programming.
How do you like your equilibrium selection problems? Hard, or very hard? SAGT '10: Proceedings of the Third International Conference on Algorithmic Game Theory.
On the rate of convergence of fictitious play. SAGT '10: Proceedings of the Third International Conference on Algorithmic Game Theory.
On the approximation performance of fictitious play in finite games. ESA '11: Proceedings of the 19th European Conference on Algorithms.
Fictitious play is a simple, well-known, and often-used algorithm for playing (and, especially, learning to play) games. However, in general it does not converge to equilibrium; even when it does, we may not be able to run it to convergence. Still, we may obtain an approximate equilibrium. In this paper, we study the approximation properties that fictitious play obtains when it is run for a limited number of rounds. We show that if both players randomize uniformly over their actions in the first r rounds of fictitious play, then the result is an ε-equilibrium, where ε = (r + 1)/(2r). We then consider the possibility that the two players use different values of r. We show how to obtain the optimal approximation guarantee when both the opponent's r and the game are adversarially chosen (but there is an upper bound R on the opponent's r), using a linear program formulation. We show that if the action played in the ith round of fictitious play is chosen with probability proportional to 1 for i = 1 and 1/(i − 1) for all 2 ≤ i ≤ R + 1, this gives an approximation guarantee of 1 − 1/(2 + ln R). We also obtain a lower bound of 1 − 4/ln R. This provides an actionable prescription for how long to run fictitious play.
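As a rough illustration (not code from the paper), the sketch below runs fictitious play for r rounds on a random bimatrix game with payoffs in [0, 1], takes each player's uniform mixture over their first r actions, and checks the resulting incentive to deviate against the ε = (r + 1)/(2r) guarantee. The game instance, tie-breaking rule, and random initial round are all illustrative assumptions.

```python
import numpy as np

def fictitious_play(A, B, r, rng):
    """Run r rounds of fictitious play: after an arbitrary (here random)
    first round, each player best-responds to the opponent's empirical
    action frequencies. Returns the uniform mixtures over the r rounds."""
    m, n = A.shape
    counts_row = np.zeros(m)   # how often the row player chose each action
    counts_col = np.zeros(n)
    counts_row[rng.integers(m)] += 1   # round 1: arbitrary actions
    counts_col[rng.integers(n)] += 1
    for _ in range(r - 1):
        i = int(np.argmax(A @ (counts_col / counts_col.sum())))
        j = int(np.argmax((counts_row / counts_row.sum()) @ B))
        counts_row[i] += 1
        counts_col[j] += 1
    return counts_row / r, counts_col / r

def epsilon(A, B, x, y):
    """Largest additive gain either player gets by deviating from (x, y)."""
    return max(np.max(A @ y) - x @ A @ y, np.max(x @ B) - x @ B @ y)

rng = np.random.default_rng(0)
r = 20
A = rng.random((4, 4))   # payoffs in [0, 1], as the guarantee assumes
B = rng.random((4, 4))
x, y = fictitious_play(A, B, r, rng)
eps = epsilon(A, B, x, y)
print(f"eps = {eps:.3f}, guarantee (r+1)/(2r) = {(r + 1) / (2 * r):.3f}")
```

On random games the observed ε is typically well below the worst-case bound (0.525 for r = 20); the bound itself holds for every run, since the mixtures are exactly the empirical play frequencies.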