COLT '90 Proceedings of the third annual workshop on Computational learning theory
C4.5: programs for machine learning
The weighted majority algorithm
Information and Computation
A game of prediction with expert advice
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Exponentiated gradient versus gradient descent for linear predictors
Information and Computation
Predicting Nearly As Well As the Best Pruning of a Decision Tree
Machine Learning - Special issue on the eighth annual conference on computational learning theory, (COLT '95)
Journal of the ACM (JACM)
Using and combining predictors that specialize
STOC '97 Proceedings of the twenty-ninth annual ACM symposium on Theory of computing
An efficient extension to mixture techniques for prediction and decision trees
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
Efficient learning with virtual threshold gates
Information and Computation
Upward Planar Drawing of Single-Source Acyclic Digraphs
SIAM Journal on Computing
Predicting nearly as well as the best pruning of a planar decision graph
Theoretical Computer Science
The context-tree weighting method: basic properties
IEEE Transactions on Information Theory
The Robustness of the p-Norm Algorithms
Machine Learning
Path kernels and multiplicative updates
The Journal of Machine Learning Research
The Difficulty of Reduced Error Pruning of Leveled Branching Programs
Annals of Mathematics and Artificial Intelligence
On approximating weighted sums with exponentially many terms
Journal of Computer and System Sciences
Efficient algorithms for online decision problems
Journal of Computer and System Sciences - Special issue: Learning theory 2003
Tracking the best of many experts
COLT'05 Proceedings of the 18th annual conference on Learning Theory
We design efficient on-line algorithms that predict nearly as well as the best pruning of a planar decision graph; the graph is assumed to be acyclic. As in previous work on decision trees, we implicitly maintain one weight for each of the exponentially many prunings. The method works for a large class of algorithms that update their weights multiplicatively, and it can also be used to design algorithms that predict nearly as well as the best convex combination of prunings.
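The abstract builds on the standard prediction-with-expert-advice framework, in which each pruning acts as one expert and experts are reweighted multiplicatively by their losses. The paper's contribution is doing this implicitly over exponentially many prunings; the sketch below only illustrates the underlying multiplicative-update (Hedge/weighted-majority style) scheme with an explicit, small expert set. The function name, learning rate, and loss choice are illustrative assumptions, not the paper's algorithm.

```python
import math

def hedge_predict(expert_preds, outcomes, eta=0.5):
    """Prediction with expert advice via multiplicative weight updates.

    expert_preds: one prediction sequence per expert, values in [0, 1]
    outcomes:     observed outcomes in [0, 1]
    eta:          learning rate (illustrative default)
    Returns the cumulative absolute loss of the master predictions.
    """
    n = len(expert_preds)
    weights = [1.0] * n          # one weight per expert (per "pruning")
    total_loss = 0.0
    for t, y in enumerate(outcomes):
        w_sum = sum(weights)
        # Master prediction: weighted average of the experts' advice.
        y_hat = sum(w * p[t] for w, p in zip(weights, expert_preds)) / w_sum
        total_loss += abs(y_hat - y)
        # Multiplicative update: shrink each expert's weight
        # exponentially in its own loss this trial.
        weights = [w * math.exp(-eta * abs(p[t] - y))
                   for w, p in zip(weights, expert_preds)]
    return total_loss
```

With this update, the master's cumulative loss exceeds that of the best single expert by only an additive term logarithmic in the number of experts, which is why maintaining the exponentially many pruning-weights implicitly still yields a meaningful guarantee.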