We design efficient on-line algorithms that predict nearly as well as the best pruning of a planar decision graph, where the graph is assumed to be acyclic. As in previous work on decision trees, we implicitly maintain one weight for each of the exponentially many prunings. The method works for a large class of algorithms that update their weights multiplicatively, and it can also be used to design algorithms that predict nearly as well as the best convex combination of prunings.
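To make the multiplicative-update scheme concrete, here is a minimal sketch of expert-style prediction with multiplicative weight updates, with each pruning enumerated explicitly as one "expert." This is only an illustration of the update rule the abstract refers to: the paper's contribution is maintaining these exponentially many weights implicitly, which this naive sketch does not do. The function name, the squared-loss penalty, and the learning rate `eta` are illustrative assumptions, not taken from the paper.

```python
import math

def multiplicative_weights_predict(expert_preds, outcomes, eta=0.5):
    """Predict with a weighted average of experts, updating weights
    multiplicatively after each outcome.

    expert_preds: list of prediction sequences, one per expert (here each
                  expert stands for one pruning, enumerated explicitly;
                  the paper maintains the same weights implicitly).
    outcomes:     observed outcome sequence, same length as each expert's.
    Returns the master prediction sequence and the final weights.
    """
    n = len(expert_preds)
    weights = [1.0 / n] * n                      # uniform prior over prunings
    master = []
    for t, y in enumerate(outcomes):
        total = sum(weights)
        # Master prediction: weight-averaged prediction of all experts.
        p = sum(w * preds[t] for w, preds in zip(weights, expert_preds)) / total
        master.append(p)
        # Multiplicative update: shrink each weight by exp(-eta * loss),
        # using squared loss as an illustrative loss function.
        weights = [w * math.exp(-eta * (preds[t] - y) ** 2)
                   for w, preds in zip(weights, expert_preds)]
    return master, weights
```

Because every update multiplies a weight by a per-expert factor, the weight of each pruning stays proportional to the exponentiated total loss it has accumulated, which is the property the implicit algorithms exploit.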