We present worst-case bounds on the learning rate of a known prediction method based on hierarchical applications of binary context tree weighting (CTW) predictors. A heuristic application of this approach, relying on Huffman's alphabet decomposition, is known to achieve state-of-the-art performance in prediction and lossless compression benchmarks. We show that our new bound for this heuristic is tighter than the best known performance guarantees for prediction and lossless compression algorithms in various settings. This result substantiates the efficiency of the hierarchical method and provides a compelling explanation for its practical success. In addition, we report experiments that examine other possibilities for improving the multi-alphabet prediction performance of CTW-based algorithms.
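The abstract does not spell out the decomposition, so the following Python sketch is illustrative only. It assumes the scheme builds a Huffman tree over the alphabet and runs one binary sequential predictor at each internal node, multiplying the bit probabilities along a symbol's root-to-leaf path to obtain its probability. The KTEstimator below is a deliberately simplified order-0 stand-in for a genuine binary CTW predictor, and all identifiers (huffman_codes, HierarchicalBinaryPredictor) are hypothetical, not taken from the paper.

```python
import heapq
import math
from collections import defaultdict


class KTEstimator:
    """Krichevsky-Trofimov (add-half) estimator.

    An order-0 stand-in for a full binary CTW predictor; the actual
    method would run a context-tree-weighted model at each node.
    """
    def __init__(self):
        self.counts = [0, 0]

    def predict(self, bit):
        # P(bit) = (count(bit) + 1/2) / (total + 1)
        total = self.counts[0] + self.counts[1]
        return (self.counts[bit] + 0.5) / (total + 1.0)

    def update(self, bit):
        self.counts[bit] += 1


def huffman_codes(freqs):
    """Build {symbol: bitstring} codewords from symbol frequencies."""
    heap = [(f, i, (s,)) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        f0, _, group0 = heapq.heappop(heap)
        f1, _, group1 = heapq.heappop(heap)
        for s in group0:
            codes[s] = "0" + codes[s]   # left branch
        for s in group1:
            codes[s] = "1" + codes[s]   # right branch
        heapq.heappush(heap, (f0 + f1, tiebreak, group0 + group1))
        tiebreak += 1
    return codes


class HierarchicalBinaryPredictor:
    """One binary predictor per internal node of the Huffman tree.

    A symbol's probability is the product of the bit probabilities on
    its root-to-leaf path, so predicting over a k-ary alphabet reduces
    to at most a codeword's length of binary predictions per symbol.
    """
    def __init__(self, codes):
        self.codes = codes
        self.nodes = defaultdict(KTEstimator)  # keyed by code prefix

    def prob(self, symbol):
        p, prefix = 1.0, ""
        for bit in self.codes[symbol]:
            p *= self.nodes[prefix].predict(int(bit))
            prefix += bit
        return p

    def update(self, symbol):
        prefix = ""
        for bit in self.codes[symbol]:
            self.nodes[prefix].update(int(bit))
            prefix += bit


if __name__ == "__main__":
    text = "abracadabra"
    freqs = defaultdict(int)
    for c in text:
        freqs[c] += 1
    model = HierarchicalBinaryPredictor(huffman_codes(freqs))
    bits = 0.0
    for c in text:
        bits -= math.log2(model.prob(c))  # sequential log-loss = code length
        model.update(c)
    print(f"{bits:.2f} bits to encode {len(text)} symbols sequentially")
```

Replacing KTEstimator with a context-tree-weighted binary model at each internal node would recover the kind of hierarchical CTW scheme the bounds address; the Huffman tree keeps frequent symbols on short paths, so most prediction steps involve few binary decisions.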