We start from a simple asymptotic result for on-line regression with the quadratic loss function: the class of continuous limited-memory prediction strategies admits a "leading prediction strategy", which not only performs asymptotically at least as well as any continuous limited-memory strategy but also has the property that the excess loss of any continuous limited-memory strategy is determined by how closely that strategy imitates the leading one. More specifically, for any class of prediction strategies constituting a reproducing kernel Hilbert space, we construct a leading strategy in the sense that the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. We then extend this result to loss functions given by Bregman divergences and by strictly proper scoring rules.
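As a hedged illustration of the setting (not the paper's construction), the sketch below runs the on-line regression protocol with quadratic loss: at each round a strategy predicts from the observed past, the outcome is revealed, and losses accumulate. The toy limited-memory strategies (averages of the last k outcomes) and the sine-wave "reality" sequence are assumptions introduced only for this example; the excess loss of one strategy over another is the difference of their cumulative losses.

```python
import math

def quadratic_loss(prediction, outcome):
    return (prediction - outcome) ** 2

def limited_memory_strategy(history, k):
    """Toy limited-memory strategy: the mean of the last k outcomes."""
    recent = history[-k:]
    return sum(recent) / len(recent) if recent else 0.0

def run_protocol(outcomes, strategy):
    """On-line protocol: predict, observe the outcome, accumulate loss."""
    history, total_loss = [], 0.0
    for y in outcomes:
        p = strategy(history)            # prediction made before seeing y
        total_loss += quadratic_loss(p, y)
        history.append(y)                # outcome revealed to the strategy
    return total_loss

# A smooth synthetic "reality" sequence (hypothetical example data).
outcomes = [math.sin(t / 5.0) for t in range(100)]
loss_k3 = run_protocol(outcomes, lambda h: limited_memory_strategy(h, 3))
loss_k10 = run_protocol(outcomes, lambda h: limited_memory_strategy(h, 10))
```

On a slowly varying sequence like this one, the shorter-memory strategy lags less and so incurs smaller cumulative quadratic loss; comparing such cumulative losses is exactly the relative-loss viewpoint the abstract refers to.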