COLT '90 Proceedings of the third annual workshop on Computational learning theory
The weighted majority algorithm
Information and Computation
Predicting Nearly As Well As the Best Pruning of a Decision Tree
Machine Learning - Special issue on the eighth annual conference on computational learning theory, (COLT '95)
Journal of the ACM (JACM)
Machine Learning - Special issue on context sensitivity and concept drift
Derandomizing Stochastic Prediction Strategies
Machine Learning - Special issue: computational learning theory, COLT '97
Predicting nearly as well as the best pruning of a planar decision graph
Theoretical Computer Science
Path kernels and multiplicative updates
COLT '02 Proceedings of the 15th Annual Conference on Computational Learning Theory
Tracking the best linear predictor
The Journal of Machine Learning Research
Tracking a small set of experts by mixing past posteriors
The Journal of Machine Learning Research
Path kernels and multiplicative updates
The Journal of Machine Learning Research
Efficient adaptive algorithms and minimax bounds for zero-delay lossy source coding
IEEE Transactions on Signal Processing
Following the Perturbed Leader to Gamble at Multi-armed Bandits
ALT '07 Proceedings of the 18th international conference on Algorithmic Learning Theory
Combining initial segments of lists
ALT'11 Proceedings of the 22nd international conference on Algorithmic learning theory
The shortest path problem under partial monitoring
COLT'06 Proceedings of the 19th annual conference on Learning Theory
Combining initial segments of lists
Theoretical Computer Science
An algorithm is presented for online prediction that makes it possible to track the best expert efficiently even when the number of experts is exponentially large, provided that the set of experts has a structure that admits an efficient implementation of the exponentially weighted average predictor. As an example, we work out the case where each expert is represented by a path in a directed graph and the loss of an expert is the sum of the weights of the edges on its path.
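The structural property the abstract alludes to can be illustrated with a minimal sketch (not the paper's implementation): on a tiny two-path graph with made-up edge losses and a hypothetical learning rate, the exponential weight of a path factors into a product of per-edge weights, because the path's loss is a sum over its edges. This factorization is what lets an algorithm maintain one weight per edge instead of one weight per (possibly exponentially many) path.

```python
import math

eta = 0.5  # learning rate (hypothetical value, for illustration only)

# Experts are the two paths of a small DAG s -> {a, b} -> t.
paths = {
    "top":    [("s", "a"), ("a", "t")],
    "bottom": [("s", "b"), ("b", "t")],
}

# One round of edge losses (made-up numbers).
edge_loss = {("s", "a"): 0.2, ("a", "t"): 0.1,
             ("s", "b"): 0.7, ("b", "t"): 0.4}

# Naive exponentially weighted average: one weight per path (expert),
# where a path's loss is the sum of the losses of its edges.
path_weight = {
    name: math.exp(-eta * sum(edge_loss[e] for e in edges))
    for name, edges in paths.items()
}

# Key property: because the loss is additive over edges, the path weight
# factors into a product of per-edge weights, so only one weight per edge
# needs to be stored and updated.
edge_weight = {e: math.exp(-eta * loss) for e, loss in edge_loss.items()}
for name, edges in paths.items():
    factored = math.prod(edge_weight[e] for e in edges)
    assert abs(factored - path_weight[name]) < 1e-12
```

With these numbers, the "top" path (total loss 0.3) ends the round with a larger weight than the "bottom" path (total loss 1.1), as the weighted-average predictor requires.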