Boosting algorithms have proved successful in many areas of machine learning, and in ranking in particular. For the classes of weak learners typically used in boosting (such as decision stumps or trees), a large feature space can slow down training, while a long sequence of weak hypotheses combined by boosting can yield a computationally expensive model. In this paper we propose a strategy that builds several sequences of weak hypotheses in parallel and extends those that are likely to yield a good model. The weak-hypothesis sequences are arranged in a boosting tree, and new weak hypotheses are added to promising nodes of the tree (both leaves and inner nodes) using a randomized method. Theoretical results show that the proposed algorithm asymptotically matches the performance of the base boosting algorithm it is applied to. Experiments on ranking web documents and on move ordering in chess indicate that the new strategy performs better when the length of the sequence is limited, and otherwise converges to a performance similar to that of the original boosting algorithms.
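The idea can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's implementation: each tree node extends its parent's weak-hypothesis sequence by one AdaBoost step (decision stumps on 1-D data), and the node to expand is chosen by a UCB1-style rule — the abstract only says "some randomized method", so the UCB1 choice, the reward (training accuracy), and all names here are assumptions.

```python
import math

def stump_predict(threshold, sign, x):
    # Decision stump on a single feature: +sign above threshold, -sign below.
    return sign if x >= threshold else -sign

def train_stump(xs, ys, w):
    # Exhaustively pick the (threshold, sign) pair with minimal weighted error.
    best = (None, None, float('inf'))
    for t in sorted(set(xs)):
        for sign in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if stump_predict(t, sign, xi) != yi)
            if err < best[2]:
                best = (t, sign, err)
    return best

class Node:
    # One node of the boosting tree: a weak-hypothesis sequence (path from
    # the root) plus the AdaBoost sample weights reached along that path.
    def __init__(self):
        self.stumps = []      # list of (alpha, threshold, sign) on this path
        self.weights = None   # AdaBoost example weights at this node
        self.visits = 0       # bandit statistics for the UCB1 rule
        self.reward = 0.0

def path_predict(node, x):
    s = sum(a * stump_predict(t, sg, x) for a, t, sg in node.stumps)
    return 1 if s >= 0 else -1

def extend(node, xs, ys):
    # One AdaBoost step starting from the parent's state, producing a child.
    child = Node()
    w = list(node.weights)
    t, sign, err = train_stump(xs, ys, w)
    err = max(err, 1e-10)
    alpha = 0.5 * math.log((1 - err) / err)
    child.stumps = node.stumps + [(alpha, t, sign)]
    # Reweight: up-weight misclassified examples, down-weight correct ones.
    w = [wi * math.exp(-alpha * yi * stump_predict(t, sign, xi))
         for xi, yi, wi in zip(xs, ys, w)]
    z = sum(w)
    child.weights = [wi / z for wi in w]
    return child

def boosting_tree(xs, ys, rounds=20, c=1.4):
    root = Node()
    root.weights = [1.0 / len(xs)] * len(xs)
    nodes = [root]
    for step in range(1, rounds + 1):
        # UCB1 over all nodes (leaves and inner nodes alike).
        def ucb(n):
            if n.visits == 0:
                return float('inf')
            return n.reward / n.visits + c * math.sqrt(math.log(step) / n.visits)
        node = max(nodes, key=ucb)
        child = extend(node, xs, ys)
        acc = sum(path_predict(child, x) == y
                  for x, y in zip(xs, ys)) / len(xs)
        node.visits += 1
        node.reward += acc
        nodes.append(child)
    # Return the non-root path with the best training accuracy.
    return max((n for n in nodes if n.stumps),
               key=lambda n: sum(path_predict(n, x) == y
                                 for x, y in zip(xs, ys)))
```

Because unvisited nodes score infinity under UCB1, every new node is tried at least once before the exploitation term dominates; this is what lets the tree keep several candidate sequences alive in parallel instead of committing to a single boosting chain.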