Continuous experts and the binning algorithm
COLT'06: Proceedings of the 19th Annual Conference on Learning Theory
We consider the problem of learning to predict as well as the best expert in a group of experts making continuous predictions. We assume the learning algorithm has prior knowledge of the maximum number of mistakes made by the best expert. We propose a new master strategy that achieves the best known performance for on-line learning with continuous experts in the mistake-bounded model. Our ideas are based on drifting games, a generalization of boosting and on-line learning algorithms. We prove new lower bounds based on the drifting-games framework which, though not as tight as previous bounds, have simpler proofs and do not require an enormous number of experts. We also extend previous lower bounds to show that our upper bounds are exactly tight for sufficiently many experts. A surprising consequence of our work is that continuous experts are only as powerful as experts making binary or no prediction in each round.
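To make the setting concrete, the following is a minimal sketch of on-line prediction with continuous experts using the classic exponentially weighted average forecaster. This is a standard baseline for the experts framework the abstract describes, not the paper's binning algorithm; the learning rate `eta` and the toy data are illustrative assumptions.

```python
import math

def master_predict(predictions, weights):
    # Master's continuous prediction: weighted average of the experts'
    # predictions, each in [0, 1].
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

def update_weights(weights, predictions, outcome, eta=0.5):
    # Multiplicatively penalize each expert in proportion to its
    # absolute loss on the revealed outcome (exponential weights).
    return [w * math.exp(-eta * abs(p - outcome))
            for w, p in zip(weights, predictions)]

# Toy run: three experts predicting over three rounds with binary outcomes.
weights = [1.0, 1.0, 1.0]
rounds = [([0.9, 0.2, 0.5], 1),
          ([0.8, 0.3, 0.5], 1),
          ([0.1, 0.7, 0.5], 0)]
master_loss = 0.0
for preds, y in rounds:
    y_hat = master_predict(preds, weights)
    master_loss += abs(y_hat - y)
    weights = update_weights(weights, preds, y)
```

After the run, the most accurate expert (the first) carries the largest weight, so the master's future predictions track it; the paper's master strategy sharpens the mistake bound of this kind of scheme when the best expert's mistake budget is known in advance.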