An introduction to Kolmogorov complexity and its applications (2nd ed.)
New error bounds for Solomonoff prediction
Journal of Computer and System Sciences
General Loss Bounds for Universal Sequence Prediction
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions
COLT '02 Proceedings of the 15th Annual Conference on Computational Learning Theory
Algorithmic Theories of Everything
Optimality of universal Bayesian sequence prediction for general loss and alphabet
The Journal of Machine Learning Research
Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability
Sequential predictions based on algorithmic complexity
Journal of Computer and System Sciences
Monotone conditional complexity bounds on future prediction errors
ALT'05 Proceedings of the 16th international conference on Algorithmic Learning Theory
Complexity-based induction systems: Comparisons and convergence theorems
IEEE Transactions on Information Theory
Convergence and loss bounds for Bayesian sequence prediction
IEEE Transactions on Information Theory
On calibration error of randomized forecasting algorithms
Theoretical Computer Science
We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution μ by the algorithmic complexity of μ. Here we assume that we are at a time t > 1 and have already observed x = x_1...x_t. We bound the future prediction performance on x_{t+1}x_{t+2}... by a new variant of the algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
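For context, the Solomonoff bound that the abstract refers to can be sketched in its standard binary-alphabet form (stated here from the broader literature, e.g. Hutter's textbook listed above, not taken from this paper; notation is illustrative):

```latex
% Solomonoff's total-deviation bound (standard form, binary alphabet).
% M is the universal a-priori semimeasure, \mu the true computable
% distribution, and K(\mu) the prefix complexity of (a program computing) \mu.
\sum_{t=1}^{\infty} \sum_{x_{<t}} \mu(x_{<t})
  \bigl( M(0 \mid x_{<t}) - \mu(0 \mid x_{<t}) \bigr)^{2}
  \;\le\; \tfrac{\ln 2}{2}\, K(\mu)
```

The finite right-hand side is what makes the total deviation "finitely bounded": summed over all times t, the expected squared prediction error is controlled by the complexity of μ alone. The abstract's contribution is a conditional analogue of this bound for the remaining sequence x_{t+1}x_{t+2}... after x has been observed.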