On the Absence of Predictive Complexity for Some Games
ALT '02 Proceedings of the 13th International Conference on Algorithmic Learning Theory
In this paper we introduce a general method of establishing tight linear inequalities between different types of predictive complexity. Predictive complexity is a generalization of Kolmogorov complexity that bounds the ability of any algorithm to predict the elements of a sequence. Our method relies on probabilistic considerations and allows us to describe explicitly the sets of coefficients that correspond to true inequalities. We apply the method to two particular types of predictive complexity: logarithmic complexity, which coincides with a variant of Kolmogorov complexity, and square-loss complexity, which is of interest for applications.
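For context, the two games named in the abstract are standardly defined by their loss functions on outcomes $\omega \in \{0,1\}$ and predictions $\gamma \in [0,1]$; the corresponding predictive complexity of a string then lower-bounds the cumulative loss of any prediction strategy on that string. A brief sketch of the standard definitions:

$$
\lambda_{\log}(\omega, \gamma) \;=\;
\begin{cases}
-\log \gamma & \text{if } \omega = 1,\\
-\log(1-\gamma) & \text{if } \omega = 0,
\end{cases}
\qquad
\lambda_{\mathrm{sq}}(\omega, \gamma) \;=\; (\omega - \gamma)^2 .
$$

The cumulative loss of a strategy $\mathfrak{A}$ on a string $x = x_1 \ldots x_n$ is $\mathrm{Loss}_{\mathfrak{A}}(x) = \sum_{i=1}^{n} \lambda(x_i, \gamma_i)$, where $\gamma_i$ is the prediction made after seeing $x_1 \ldots x_{i-1}$; a linear inequality between complexities relates these quantities for the logarithmic and square-loss games.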