Strong Entropy Concentration, Game Theory, and Algorithmic Randomness
COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and the 5th European Conference on Computational Learning Theory
Universal Lossless Compression of Piecewise Stationary Slowly Varying Sources
DCC '01 Proceedings of the Data Compression Conference
On prediction using variable order Markov models
Journal of Artificial Intelligence Research
The relationship between causal and noncausal mismatched estimation in continuous-time AWGN channels
IEEE Transactions on Information Theory
The capacity of the channel induced by a given class of sources is well known to be an attainable lower bound on the redundancy of universal codes with respect to this class, both in the minimax sense and in the Bayesian (maximin) sense. We show that this capacity is essentially a lower bound in a stronger sense as well, namely, for “most” sources in the class. This result extends Rissanen's (1984, 1986) lower bound for parametric families. We demonstrate the applicability of this result in several examples, e.g., parametric families with growing dimensionality, piecewise-fixed sources, arbitrarily varying sources, and noisy samples of learnable functions. Finally, we discuss implications of our results for statistical inference.
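For context, a compact statement of the classical redundancy-capacity theorem that the abstract strengthens may help; the notation below ({P_theta} for the source class, Q for the coding distribution, w for a prior on the class, C_n for the capacity) is supplied here for illustration and is not taken from this page:

% Redundancy-capacity theorem (classical form; notation assumed here):
% for a class of n-block sources {P_theta : theta in Lambda}, the minimax
% redundancy, the maximin (Bayesian) redundancy, and the capacity C_n of
% the induced "channel" theta -> X^n all coincide.
\[
  R_n^{+} \;=\; \inf_{Q}\,\sup_{\theta \in \Lambda} D\bigl(P_\theta \,\|\, Q\bigr)
          \;=\; \sup_{w}\, I_w\bigl(\Theta; X^n\bigr) \;=\; C_n ,
\]
% where D(P_theta || Q) is the relative entropy, i.e. the expected
% redundancy (in nats) of coding the source P_theta with the coding
% distribution Q, and I_w is the mutual information induced by the
% prior w on Lambda. The abstract's stronger claim is, roughly, that
% for every code Q and every eps > 0, the redundancy can fall below
% (1 - eps) C_n only on a set of sources of small prior probability,
% not merely for the worst-case source in the class.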