A general framework is proposed for gradient boosting in supervised learning problems where the loss function is defined through a kernel over the output space. This extends boosting in a principled way to complex output spaces such as images, text, or graphs, and it applies to a general class of base learners that work in kernelized output spaces. Empirical results are reported on three problems: a regression task, an image completion task, and a graph prediction task. In these experiments, the framework is combined with tree-based base learners, which have attractive algorithmic properties. The results show that gradient boosting significantly improves these base learners and yields results competitive with other tree-based ensemble methods based on randomization.
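The abstract only outlines the framework, so the following is a minimal, hypothetical sketch (not the authors' implementation) of one way such a scheme could be instantiated. It assumes a squared loss induced by the output kernel, l(y, y') = k(y, y) - 2 k(y, y') + k(y', y'); multi-output regression trees as base learners (here scikit-learn's DecisionTreeRegressor); and a pre-image step that returns the training output closest to the boosted prediction in the kernel-induced feature space. The class name KernelizedGradientBoosting and all parameters are illustrative assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    class KernelizedGradientBoosting:
        """Sketch of gradient boosting under a kernel-induced squared loss.

        Predictions live in the span of the training outputs' feature
        images phi(y_j); each stage fits a multi-output regression tree
        to the current coefficient residuals.
        """

        def __init__(self, n_stages=50, learning_rate=0.1, max_depth=3):
            self.n_stages = n_stages
            self.learning_rate = learning_rate
            self.max_depth = max_depth

        def fit(self, X, K):
            # X: (n, d) input matrix; K: (n, n) Gram matrix of the
            # output kernel over the training outputs.
            n = K.shape[0]
            self.K_train = K
            # coef[i, j]: coefficient of phi(y_j) in the current
            # prediction F(x_i) = sum_j coef[i, j] * phi(y_j).
            coef = np.zeros((n, n))
            target = np.eye(n)  # phi(y_i) expressed in the same basis
            self.trees = []
            for _ in range(self.n_stages):
                # Negative gradient of the squared feature-space loss
                # at x_i is phi(y_i) - F(x_i), i.e. e_i - coef[i].
                residual = target - coef
                tree = DecisionTreeRegressor(max_depth=self.max_depth)
                tree.fit(X, residual)  # multi-output regression tree
                coef += self.learning_rate * tree.predict(X)
                self.trees.append(tree)
            return self

        def predict(self, X):
            # Pre-image step: for each test input, return the index of
            # the training output nearest in feature space to F(x).
            n = self.K_train.shape[0]
            coef = np.zeros((X.shape[0], n))
            for tree in self.trees:
                coef += self.learning_rate * tree.predict(X)
            # Squared distance to each candidate output y_c:
            # k(y_c, y_c) - 2 <phi(y_c), F(x)> + ||F(x)||^2; the last
            # term is constant per row and dropped from the argmin.
            dist2 = np.diag(self.K_train)[None, :] - 2.0 * coef @ self.K_train
            return np.argmin(dist2, axis=1)

Hypothetical usage: given a Gram matrix K of an output kernel over the training outputs, model = KernelizedGradientBoosting().fit(X_train, K) and idx = model.predict(X_test) returns, for each test input, the index of the closest training output. Restricting the pre-image search to the training outputs keeps that step a simple argmin at some cost in expressiveness; richer pre-image solvers could be substituted.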