First, we modify the basic (binary) context-tree weighting method so that the past symbols $x_{1-D}, x_{2-D}, \ldots, x_0$ are not needed by the encoder and the decoder. Then we describe how to make the context-tree depth $D$ infinite, which results in optimal redundancy behavior for all tree sources, while the number of records in the context tree remains no larger than $2T - 1$, where $T$ is the length of the source sequence. For this extended context-tree weighting algorithm we show that, for stationary and ergodic sources, with probability one the compression ratio does not exceed the source entropy as the source-sequence length $T \to \infty$.
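Since the abstract compresses a lot of machinery into a few sentences, a minimal sketch of the basic binary context-tree weighting recursion may help fix ideas. This is not the paper's extended algorithm: it uses a fixed finite depth $D$ and pads the initial context with zeros, which are exactly the restrictions the extensions remove. The names `Node`, `update`, and `ctw_codelength` are hypothetical, chosen only for this illustration. Each node keeps a Krichevsky-Trofimov (KT) estimate $P_e$ of its own subsequence and a weighted probability $P_w = \tfrac{1}{2} P_e + \tfrac{1}{2} P_w^{(0)} P_w^{(1)}$ mixing it with its children.

```python
import math

class Node:
    """One context node: zero/one counts, KT estimate, weighted probability."""
    def __init__(self):
        self.counts = [0, 0]           # occurrences of symbol 0 and 1 in this context
        self.pe = 1.0                  # Krichevsky-Trofimov estimate for this node
        self.pw = 1.0                  # weighted probability (pe mixed with children)
        self.children = [None, None]   # lazily created; absent child has pw = 1

def update(root, context, symbol):
    """Process one symbol: walk the context path down, refresh pe/pw bottom-up."""
    path, node = [root], root
    for bit in context:                # most recent past symbol first
        if node.children[bit] is None:
            node.children[bit] = Node()
        node = node.children[bit]
        path.append(node)
    for node in reversed(path):        # leaf first, so children are current
        a, b = node.counts
        node.pe *= (node.counts[symbol] + 0.5) / (a + b + 1)  # sequential KT update
        node.counts[symbol] += 1
        if node.children == [None, None]:        # depth-D leaf: no mixing below
            node.pw = node.pe
        else:                                    # internal node: 1/2 pe + 1/2 prod
            pw0 = node.children[0].pw if node.children[0] else 1.0
            pw1 = node.children[1].pw if node.children[1] else 1.0
            node.pw = 0.5 * node.pe + 0.5 * pw0 * pw1

def ctw_codelength(bits, depth=3):
    """Ideal codelength -log2 Pw(x_1..x_T), padding the initial context with
    zeros; the modification described in the abstract removes the need for
    these past symbols. Real implementations work in the log domain to avoid
    underflow and bound the number of stored records, as the paper discusses."""
    root = Node()
    padded = [0] * depth + list(bits)
    for t, symbol in enumerate(bits):
        context = padded[t:t + depth][::-1]      # D preceding symbols, newest first
        update(root, context, symbol)
    return -math.log2(root.pw)

print(ctw_codelength([0, 1, 0, 1, 0, 1, 0, 1], depth=2))
```

In this sketch the first symbol costs exactly one bit, and the per-symbol cost of a periodic input drops as the depth-2 contexts accumulate counts; the paper's extension obtains the analogous behavior without a depth bound, with at most $2T - 1$ records.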