We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be made in each, and the goal of the learner is to make few mistakes. We are interested in the case in which the learner has reason to believe that one of some pool of known algorithms will perform well, but does not know which one. A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in such a circumstance. We call this method the Weighted Majority Algorithm. We show that this algorithm is robust in the presence of errors in the data. We discuss various versions of the Weighted Majority Algorithm and prove mistake bounds for them that are closely related to the mistake bounds of the best algorithms of the pool. For example, given a sequence of trials, if there is an algorithm in the pool A that makes at most m mistakes, then the Weighted Majority Algorithm will make at most c(log |A| + m) mistakes on that sequence, where c is a fixed constant.
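The voting scheme described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes binary (0/1) predictions, a pool of experts represented as functions, and a weight-decay factor `beta` (a common choice is 1/2) applied to the weight of every expert that errs on a trial.

```python
def weighted_majority(experts, trials, beta=0.5):
    """Run the Weighted Majority Algorithm over a sequence of trials.

    experts: list of functions mapping a trial input to a 0/1 prediction
             (the pool A of known algorithms).
    trials:  iterable of (x, label) pairs, where label is the correct 0/1 outcome.
    beta:    multiplicative penalty in (0, 1) applied to mistaken experts.

    Returns the number of mistakes the compound algorithm makes.
    """
    weights = [1.0] * len(experts)  # every expert starts with weight 1
    mistakes = 0
    for x, label in trials:
        votes = [e(x) for e in experts]
        # Weighted vote: total weight behind predicting 1 vs. predicting 0.
        w1 = sum(w for w, v in zip(weights, votes) if v == 1)
        w0 = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if w1 >= w0 else 0
        if prediction != label:
            mistakes += 1
        # Penalize every expert that predicted wrongly on this trial.
        weights = [w * beta if v != label else w
                   for w, v in zip(weights, votes)]
    return mistakes
```

Because a wrong compound prediction means at least half the total weight was on the wrong side and gets multiplied by `beta`, the total weight shrinks geometrically with each compound mistake, while the best expert's weight stays at least `beta**m`; comparing the two gives the `c(log |A| + m)` mistake bound.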