The Nonstochastic Multiarmed Bandit Problem
SIAM Journal on Computing
The Communication Complexity of Uncoupled Nash Equilibrium Procedures
Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing
Regret Minimization and the Price of Total Anarchy
Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing (STOC '08)
Intrinsic Robustness of the Price of Anarchy
Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing
Multiplicative Updates Outperform Generic No-Regret Learning in Congestion Games: Extended Abstract
Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing
On the Approximation Performance of Fictitious Play in Finite Games
Proceedings of the 19th European Conference on Algorithms (ESA '11)
Learning Equilibria of Games via Payoff Queries
Proceedings of the Fourteenth ACM Conference on Electronic Commerce
Can learning algorithms find a Nash equilibrium? The question is natural for several reasons. Learning algorithms resemble the behavior of players in many naturally arising games, so results on the convergence or nonconvergence of such dynamics inform our understanding of whether Nash equilibrium is a plausible solution concept in those settings. A second reason for asking is the hope of proving an impossibility result, independent of complexity assumptions, for computing Nash equilibria via a restricted class of reasonable algorithms.

In this work, we begin to answer this question by studying the dynamics of the standard multiplicative weights update learning algorithm, which is known to converge to a Nash equilibrium in zero-sum games. We revisit a 3×3 game, defined by Shapley [10] in the 1950s to establish that fictitious play does not converge in general games. For this simple game, we show via a potential function argument that in a variety of settings the multiplicative updates algorithm fails to find the unique Nash equilibrium: the cumulative play distributions produced by the dynamics actually drift away from the equilibrium.
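The dynamics discussed above can be simulated directly. The sketch below is illustrative only, assuming one standard presentation of Shapley's 3×3 game (a modified rock-paper-scissors whose unique Nash equilibrium is uniform play for both players) and exponential-form multiplicative weights updates; the step size `eta` and horizon `T` are arbitrary choices, not parameters from the paper.

```python
import numpy as np

# One standard presentation of Shapley's 3x3 game (assumed payoff
# matrices; a modified rock-paper-scissors). The unique Nash
# equilibrium has both players mixing uniformly (1/3, 1/3, 1/3).
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)  # row player's payoffs
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)  # column player's payoffs

def mwu_dynamics(T=5000, eta=0.1):
    """Run multiplicative weights updates for both players and return
    their time-averaged (cumulative) mixed strategies after T rounds."""
    wx = np.ones(3)          # row player's weights
    wy = np.ones(3)          # column player's weights
    avg_x = np.zeros(3)
    avg_y = np.zeros(3)
    for _ in range(T):
        x = wx / wx.sum()    # current mixed strategies
        y = wy / wy.sum()
        avg_x += x
        avg_y += y
        # each player reweights actions by their expected payoff
        # against the opponent's current mixture
        wx *= np.exp(eta * (A @ y))
        wy *= np.exp(eta * (x @ B))
        wx /= wx.max()       # rescale to avoid overflow
        wy /= wy.max()
    return avg_x / T, avg_y / T

x_bar, y_bar = mwu_dynamics()
# how far cumulative play sits from the uniform equilibrium
print(np.linalg.norm(x_bar - 1 / 3), np.linalg.norm(y_bar - 1 / 3))
```

Printing the distance of the cumulative distributions from the uniform equilibrium over increasing horizons gives a quick empirical check of the drift behavior the abstract describes.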