Theory of recursive functions and effective computability
COLT '90 Proceedings of the third annual workshop on Computational learning theory
The weighted majority algorithm
Information and Computation
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Journal of the ACM (JACM)
An introduction to Kolmogorov complexity and its applications (2nd ed.)
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
A game of prediction with expert advice
Journal of Computer and System Sciences - Special issue on the eighth annual workshop on computational learning theory, July 5–8, 1995
Tight worst-case loss bounds for predicting with expert advice
Information and Computation
Using Kolmogorov complexity for understanding some limitations on steganography
ISIT'09 Proceedings of the 2009 IEEE international conference on Symposium on Information Theory - Volume 4
Predictive complexity and generalized entropy rate of stationary ergodic processes
ALT'12 Proceedings of the 23rd international conference on Algorithmic Learning Theory
The problem of the existence of predictive complexity for the absolute loss game is studied. Predictive complexity is a generalization of Kolmogorov complexity that bounds the ability of any algorithm to predict the elements of a sequence of outcomes. For perfectly mixable loss functions (the logarithmic and squared difference losses among them), predictive complexity is, like Kolmogorov complexity, defined to within an additive constant. The absolute loss function is not perfectly mixable, and whether the corresponding predictive complexity, defined to within an additive constant, exists remains an open question. We prove that in the case of the absolute loss game predictive complexity can be defined to within an additive term O(), where n is the length of the sequence of outcomes. We also prove that in some restricted settings this bound cannot be improved.
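For context, the notion of perfect mixability invoked in the abstract can be stated as follows; this is the standard definition from the predictive-complexity literature, sketched here for the reader rather than quoted from the source:

```latex
% A loss function \lambda(\omega,\gamma) is \eta-mixable (for some \eta > 0)
% if, for every finite set of predictions \gamma_1,\dots,\gamma_k and every
% set of weights p_1,\dots,p_k with \sum_i p_i = 1, there is a single
% prediction \gamma such that, for every outcome \omega,
\[
  \lambda(\omega,\gamma) \;\le\;
  -\frac{1}{\eta}\,\ln \sum_{i=1}^{k} p_i \, e^{-\eta\,\lambda(\omega,\gamma_i)} .
\]
% The logarithmic loss is \eta-mixable with \eta = 1, and the squared
% difference loss on [0,1] with \eta = 2.  The absolute loss
% \lambda(\omega,\gamma) = |\omega - \gamma| is not \eta-mixable for any
% \eta > 0, which is why its predictive complexity cannot be obtained by the
% same aggregating construction that yields an additive-constant bound for
% the mixable losses.
```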