Minimax regret under log loss for general classes of experts
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
On prediction of individual sequences relative to a set of experts in the presence of noise
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
Worst-Case Bounds for the Logarithmic Loss of Predictors
Machine Learning
Texture Mixing and Texture Movie Synthesis Using Statistical Learning
IEEE Transactions on Visualization and Computer Graphics
Synthesizing Sound Textures through Wavelet Tree Learning
IEEE Computer Graphics and Applications
On asymptotically optimal methods of prediction and adaptive coding for Markov sources
Journal of Complexity
On-line Decision Making for a Class of Loss Functions via Lempel-Ziv Parsing
DCC '00 Proceedings of the Conference on Data Compression
Journal of Logic, Language and Information
The empirical Bayes envelope and regret minimization in competitive Markov decision processes
Mathematics of Operations Research
Note: fractal dimension and logarithmic loss unpredictability
Theoretical Computer Science
Using a Stochastic Complexity Measure to Check the Efficient Market Hypothesis
Computational Economics
Optimality of universal Bayesian sequence prediction for general loss and alphabet
The Journal of Machine Learning Research
Journal of Computer and System Sciences - Special issue on COLT 2002
Entropy-based bounds for online algorithms
ACM Transactions on Algorithms (TALG)
Regret Minimization Under Partial Monitoring
Mathematics of Operations Research
Superior Guarantees for Sequential Prediction and Lossless Compression via Alphabet Decomposition
The Journal of Machine Learning Research
On calibration error of randomized forecasting algorithms
Theoretical Computer Science
Sequential prediction under incomplete feedback
Proceedings of the 2007 conference on Artificial Intelligence Research and Development
A joint information model for n-best ranking
COLING '08 Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1
A fast normalized maximum likelihood algorithm for multinomial data
IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
IEEE Transactions on Signal Processing
Sequential probability assignment via online convex programming using exponential families
ISIT'09 Proceedings of the 2009 IEEE international conference on Symposium on Information Theory - Volume 2
Universal coding for distributions over co-trees
ISIT'09 Proceedings of the 2009 IEEE international conference on Symposium on Information Theory - Volume 1
Discrete denoising with shifts
IEEE Transactions on Information Theory
Learning locally minimax optimal Bayesian networks
International Journal of Approximate Reasoning
Zero-rate feedback can achieve the empirical capacity
IEEE Transactions on Information Theory
Universal reinforcement learning
IEEE Transactions on Information Theory
On supervised selection of Bayesian networks
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
High-confidence predictions under adversarial uncertainty
Proceedings of the 3rd Innovations in Theoretical Computer Science Conference
A randomized online learning algorithm for better variance control
COLT'06 Proceedings of the 19th annual conference on Learning Theory
Predictive complexity and generalized entropy rate of stationary ergodic processes
ALT'12 Proceedings of the 23rd international conference on Algorithmic Learning Theory
High-confidence predictions under adversarial uncertainty
ACM Transactions on Computation Theory (TOCT) - Special issue on innovations in theoretical computer science 2012
Online portfolio selection: A survey
ACM Computing Surveys (CSUR)
This paper presents an overview of universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic and the deterministic settings of the universal prediction problem are described, with emphasis on the analogies and differences between results in the two settings.