The usual theory of prediction with expert advice does not differentiate between good and bad "experts": its typical results only assert that it is possible to efficiently merge not too extensive pools of experts, no matter how good or how bad they are. On the other hand, it is natural to expect that a good expert's predictions will in some way agree with the actual outcomes (e.g., they will be accurate on average). In this paper we show that, in the case of the Brier prediction game (also known as the square-loss game), the predictions of a good (in some weak and natural sense) expert must satisfy the law of large numbers (both strong and weak) and the law of the iterated logarithm; we also show that two good experts' predictions must be in asymptotic agreement. To help the reader's intuition, we give a Kolmogorov-complexity interpretation of our results. Finally, we briefly discuss possible extensions of our results to more general games; the limit theorems for sequences of events in conventional probability theory correspond to the log-loss game.
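As an illustrative sketch (not taken from the paper): in the binary square-loss (Brier) game, at each round the expert announces a prediction p_n in [0, 1], the outcome omega_n in {0, 1} is revealed, and the expert suffers loss (p_n - omega_n)^2. The law-of-large-numbers property for a good expert says, roughly, that the average discrepancy (1/N) * sum(omega_n - p_n) tends to 0. The simulation below (all names are hypothetical) constructs an expert that is well calibrated by design and checks this property empirically.

```python
import random

def square_loss_game(n_rounds, seed=0):
    """Simulate the binary square-loss (Brier) game with a calibrated expert.

    At each round the expert announces p in [0, 1]; the outcome omega in
    {0, 1} is then drawn with probability p, so the expert is well
    calibrated by construction.  Returns the average discrepancy
    (1/N) * sum(omega - p) and the average square loss (p - omega)^2.
    """
    rng = random.Random(seed)
    discrepancy = 0.0
    loss = 0.0
    for _ in range(n_rounds):
        p = rng.random()                          # expert's prediction
        omega = 1 if rng.random() < p else 0      # outcome, calibrated to p
        discrepancy += omega - p
        loss += (p - omega) ** 2
    return discrepancy / n_rounds, loss / n_rounds

avg_disc, avg_loss = square_loss_game(100_000)
# For a calibrated expert, avg_disc should be close to 0 (empirical LLN),
# while avg_loss stays bounded (here around E[p(1-p)] = 1/6 for uniform p).
```

This only illustrates the sufficient direction (a calibrated expert satisfies the LLN empirically); the paper's results go the other way, deriving the limit theorems from a weak, game-theoretic notion of a "good" expert.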