In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim is to establish a shared vocabulary between the two domains so as to facilitate developments at the intersection of both fields and, as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. First, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation: by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to the payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. We then consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory; by analogy with fictitious play, we then suggest an improved update rule and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
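To make the fictitious-play side of the analogy concrete, the following is a minimal sketch of smooth best responses in standard fictitious play on a stag-hunt-style coordination game with a payoff-dominant and a risk-dominant equilibrium. The payoff matrix, temperature, and round count are illustrative assumptions, not the game or algorithm used in the paper; it shows the baseline empirical-average belief update that moderated fictitious play replaces with an integral over opponent-strategy distributions.

```python
import math
import random

# Hypothetical stag-hunt payoffs (row = own action, column = opponent action):
# action 0 = "stag" (payoff-dominant equilibrium), action 1 = "hare" (risk-dominant).
PAYOFF = [[9.0, 0.0],
          [8.0, 7.0]]

def smooth_best_response(belief, temperature=0.5):
    """Logit (softmax) response to a belief over the opponent's two actions."""
    utils = [sum(PAYOFF[a][b] * belief[b] for b in range(2)) for a in range(2)]
    m = max(utils)
    exps = [math.exp((u - m) / temperature) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

def fictitious_play(rounds=2000, seed=0):
    """Standard fictitious play: beliefs are simple empirical averages
    of the opponent's past actions (initialised with one pseudo-count each)."""
    rng = random.Random(seed)
    counts = [[1.0, 1.0], [1.0, 1.0]]  # player p's counts of the OTHER player's actions
    for _ in range(rounds):
        beliefs = [[c / sum(row) for c in row] for row in counts]
        actions = []
        for p in range(2):
            probs = smooth_best_response(beliefs[p])
            actions.append(0 if rng.random() < probs[0] else 1)
        counts[0][actions[1]] += 1
        counts[1][actions[0]] += 1
    return [[c / sum(row) for c in row] for row in counts]
```

With these toy payoffs, play starting from uniform beliefs drifts to the risk-dominant "hare" equilibrium, illustrating the behaviour of the empirical-average baseline that the paper's moderated variant is designed to improve upon.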
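The Cournot analogy on the machine-learning side can also be sketched with a toy model. The two-variable Ising-style mean field below, the coupling values, and the 1/t step size are illustrative assumptions, not the paper's algorithm: synchronous full updates (each factor fully "best-responding" to the current estimate of the other, as in Cournot adjustment) can oscillate, while averaging each new response into a running estimate, in the spirit of fictitious play's belief averaging, settles to a fixed point.

```python
import math

def mean_field_cournot(J=-2.0, h=0.1, steps=50, m0=0.5):
    """Synchronous 'Cournot'-style mean-field updates: each mean fully
    best-responds to the current value of the other mean."""
    m1 = m2 = m0
    for _ in range(steps):
        m1, m2 = math.tanh(h + J * m2), math.tanh(h + J * m1)
    return m1, m2

def mean_field_averaged(J=-2.0, h=0.1, steps=20000, m0=0.5):
    """Fictitious-play-style updates: the new response is averaged into the
    running estimate with a 1/t step size, like an empirical belief average."""
    m1 = m2 = m0
    for t in range(1, steps + 1):
        a = 1.0 / (t + 1)
        r1 = math.tanh(h + J * m2)  # response to current estimate of m2
        r2 = math.tanh(h + J * m1)
        m1 = (1 - a) * m1 + a * r1
        m2 = (1 - a) * m2 + a * r2
    return m1, m2
```

For the strong negative coupling chosen here, the Cournot-style iteration flips sign on every step, whereas the averaged iteration converges to a self-consistent solution of m = tanh(h + J*m); the design point is only that averaging past responses damps the overshoot of full synchronous updates.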