Settling the complexity of computing two-player Nash equilibria. Journal of the ACM (JACM).
The Complexity of Computing a Nash Equilibrium. SIAM Journal on Computing.
On the Inefficiency Ratio of Stable Equilibria in Congestion Games. WINE '09 Proceedings of the 5th International Workshop on Internet and Network Economics.
Convergence to Equilibrium in Local Interaction Games. FOCS '09 Proceedings of the 2009 50th Annual IEEE Symposium on Foundations of Computer Science.
Mixing time and stationary expected social welfare of logit dynamics. SAGT '10 Proceedings of the Third International Conference on Algorithmic Game Theory.
Convergence to equilibrium of logit dynamics for strategic games. Proceedings of the Twenty-Third Annual ACM Symposium on Parallelism in Algorithms and Architectures.
Stability and metastability of the logit dynamics of strategic games. FUN '12 Proceedings of the 6th International Conference on Fun with Algorithms.
Decentralized dynamics for finite opinion games. SAGT '12 Proceedings of the 5th International Conference on Algorithmic Game Theory.
Logit dynamics: a model for bounded rationality. ACM SIGecom Exchanges.
Logit dynamics [Blume, Games and Economic Behavior, 1993] is a randomized best-response dynamics for strategic games: at every time step a player is selected uniformly at random and chooses a new strategy according to a probability distribution biased toward strategies promising higher payoffs. This process defines an ergodic Markov chain over the set of strategy profiles of the game, whose unique stationary distribution serves as the long-term equilibrium concept for the game. However, when the mixing time of the chain is large (e.g., exponential in the number of players), the stationary distribution loses its appeal as an equilibrium concept, and the transient phase of the Markov chain becomes important. In several cases, on a time scale shorter than the mixing time the chain is "quasi-stationary", meaning that it stays close to some small subset of the state space, while on time scales that are multiples of the mixing time it jumps from one quasi-stationary configuration to another; this phenomenon is usually called "metastability". In this paper we give a quantitative definition of "metastable probability distributions" for a Markov chain, and we study the metastability of the logit dynamics for some classes of coordination games. In particular, we study no-risk-dominant coordination games on the clique (a case in which the logit dynamics is equivalent to the well-known Glauber dynamics for the Ising model) and coordination games on a ring (both the risk-dominant and the no-risk-dominant case). We also describe a simple "artificial" game that highlights the distinctive features of our metastability notion based on distributions.
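The update rule described above can be sketched in a few lines of Python. This is an illustrative simulation, not the paper's code: it assumes a two-strategy coordination game on the clique where a player's payoff from strategy s is the number of other players currently playing s, and it uses the standard logit choice rule, in which strategy s is picked with probability proportional to exp(beta * payoff(s)), with beta the rationality parameter (beta = 0 gives uniformly random choices; large beta approaches best response). The function names and the payoff normalization are our own choices for the sketch.

```python
import math
import random


def logit_choice(utilities, beta, rng):
    # Pick an index with probability proportional to exp(beta * utility).
    weights = [math.exp(beta * u) for u in utilities]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off


def logit_step(profile, beta, rng):
    # One step of logit dynamics on the clique coordination game:
    # a uniformly random player revises her strategy; the payoff of
    # strategy s is the number of *other* players currently playing s.
    n = len(profile)
    i = rng.randrange(n)
    counts = [0, 0]
    for j, s in enumerate(profile):
        if j != i:
            counts[s] += 1
    profile[i] = logit_choice(counts, beta, rng)
    return profile


if __name__ == "__main__":
    rng = random.Random(0)
    profile = [rng.randrange(2) for _ in range(20)]
    for _ in range(10_000):
        logit_step(profile, beta=2.0, rng=rng)
    print(sum(profile), "players on strategy 1 out of", len(profile))
```

For large beta the chain tends to stay near one of the two consensus profiles for a long time before jumping to the other, which is exactly the metastable behavior the abstract refers to; the expected time of such jumps grows with beta and with the number of players.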