On the consistency of discrete Bayesian learning
STACS '07: Proceedings of the 24th Annual Conference on Theoretical Aspects of Computer Science
Bayes' rule specifies how to obtain a posterior from a class of hypotheses endowed with a prior, together with the observed data. There are three principal ways to use this posterior for predicting the future: marginalization (integration over the hypotheses w.r.t. the posterior), MAP (taking the a posteriori most probable hypothesis), and stochastic model selection (selecting a hypothesis at random according to the posterior distribution). If the hypothesis class is countable and contains the data-generating distribution, strong consistency theorems are known for the former two methods, asserting almost sure convergence of the predictions to the truth as well as loss bounds. We prove the first corresponding results for stochastic model selection. As our main technical tool, we use the concept of a potential: an always-positive quantity that bounds the total possible amount of future prediction errors. Precisely, in each time step, the expected decrease of the potential upper-bounds the expected error. We introduce the entropy potential of a hypothesis class as its worst-case entropy with respect to the true distribution. We formulate our results in the online classification framework, but they are equally applicable to the prediction of non-i.i.d. sequences.
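The three prediction modes named in the abstract are easy to illustrate concretely. Below is a minimal sketch, not taken from the paper: it uses a small finite class of Bernoulli hypotheses as a stand-in for the countable class the paper considers, and all names, parameter values, and the example data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative finite hypothesis class: each hypothesis is the
# probability that the next bit is 1. (A stand-in for the countable
# class in the paper.)
hypotheses = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
prior = np.full(len(hypotheses), 1.0 / len(hypotheses))

def posterior(prior, hypotheses, data):
    """Bayes' rule: reweight each hypothesis by the likelihood of the data."""
    log_w = np.log(prior)
    for x in data:
        log_w += np.log(hypotheses if x == 1 else 1.0 - hypotheses)
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    return w / w.sum()

data = [1, 1, 0, 1, 1, 1, 0, 1]      # observed binary sequence
post = posterior(prior, hypotheses, data)

# 1. Marginalization: integrate the predictions w.r.t. the posterior.
p_marginal = float(post @ hypotheses)

# 2. MAP: predict with the a posteriori most probable hypothesis.
p_map = float(hypotheses[np.argmax(post)])

# 3. Stochastic model selection: draw one hypothesis at random
#    according to the posterior, then predict with it.
p_stochastic = float(rng.choice(hypotheses, p=post))

print(f"marginal: {p_marginal:.3f}  MAP: {p_map:.3f}  "
      f"stochastic: {p_stochastic:.3f}")
```

Marginalization and MAP are deterministic given the data, whereas stochastic model selection yields a randomized predictor; the paper's contribution is consistency results and loss bounds for this third, randomized mode.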