Probability (2nd ed.).
A game of prediction with expert advice. COLT '98: Proceedings of the Eleventh Annual Conference on Computational Learning Theory; Journal of Computer and System Sciences, special issue on the Eighth Annual Workshop on Computational Learning Theory (July 5–8, 1995).
Suboptimal measures of predictive complexity for absolute loss function. Information and Computation.
Predictive Complexity and Information. COLT '02: Proceedings of the 15th Annual Conference on Computational Learning Theory; Journal of Computer and System Sciences, special issue on COLT 2002.
Loss functions, complexities, and the Legendre transformation. Theoretical Computer Science, special issue on Algorithmic Learning Theory.
Prediction, Learning, and Games.
An Introduction to Kolmogorov Complexity and Its Applications.
Generalised entropy and asymptotic complexities of languages. COLT '07: Proceedings of the 20th Annual Conference on Learning Theory.
Supermartingales in prediction with expert advice. Theoretical Computer Science.
Gambling using a finite state machine. IEEE Transactions on Information Theory.
Universal prediction of individual sequences. IEEE Transactions on Information Theory.
In the online prediction framework, we use generalized entropy to study the loss rate of predictors when outcomes are drawn according to stationary ergodic distributions over the binary alphabet. We show that the notion of generalized entropy of a regular game [11] is well-defined for stationary ergodic distributions. In proving this, we obtain new game-theoretic proofs of some classical information-theoretic inequalities. Using Birkhoff's ergodic theorem and convergence properties of conditional distributions, we prove that a generalization of the classical Shannon-McMillan-Breiman theorem holds for a restricted class of regular games when no computational constraints are imposed on the prediction strategies. If a game is mixable, then there is an optimal aggregating strategy that loses at most an additive constant more than any other lower semicomputable strategy. The loss incurred by this algorithm on an infinite sequence of outcomes is called its predictive complexity. We prove that when a restricted regular game has a predictive complexity, the average predictive complexity converges to the generalized entropy of the game almost everywhere with respect to the stationary ergodic distribution.
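For context, here is a brief sketch of the central definitions in LaTeX. The notation (outcome space {0,1}, prediction space Γ, loss function λ) is introduced here for illustration and paraphrases standard definitions from the predictive-complexity literature [11]; it is not quoted from the paper itself.

% Generalized entropy of a binary game: for p = Pr[outcome = 1], the entropy
% is the least expected loss achievable by a single prediction gamma in Gamma.
\[
  H(p) \;=\; \inf_{\gamma \in \Gamma} \bigl( (1-p)\,\lambda(0,\gamma) + p\,\lambda(1,\gamma) \bigr)
\]
% Example: in the logarithmic-loss game, lambda(0,gamma) = -log(1-gamma) and
% lambda(1,gamma) = -log(gamma); the infimum is attained at gamma = p, so H(p)
% is exactly the Shannon entropy -p log p - (1-p) log(1-p).
%
% The Shannon-McMillan-Breiman-style convergence described in the abstract
% then takes the form: if K denotes the predictive complexity of the game and
% P is a stationary ergodic distribution with generalized entropy rate H(P),
\[
  \lim_{n \to \infty} \frac{K(\omega_1 \cdots \omega_n)}{n} \;=\; H(P)
  \qquad \text{for } P\text{-almost every } \omega \in \{0,1\}^{\infty}.
\]

For the log-loss game, predictive complexity coincides (up to additive terms) with the negative logarithm of Levin's a priori semimeasure, so the second display recovers the classical fact that the complexity rate of a stationary ergodic binary source equals its Shannon entropy rate.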