Gradient boosting for kernelized output spaces

  • Authors:
  • Pierre Geurts; Louis Wehenkel; Florence d'Alché-Buc

  • Affiliations:
  • University of Evry, Evry, France and University of Liège, Liège, Belgium; University of Liège, Liège, Belgium; University of Evry, Evry, France

  • Venue:
  • Proceedings of the 24th International Conference on Machine Learning (ICML)
  • Year:
  • 2007


Abstract

A general framework is proposed for gradient boosting in supervised learning problems where the loss function is defined using a kernel over the output space. It extends boosting in a principled way to complex output spaces (images, text, graphs, etc.) and can be applied to a general class of base learners that work in kernelized output spaces. Empirical results are provided on three problems: a regression problem, an image completion task, and a graph prediction problem. In these experiments, the framework is combined with tree-based base learners, which have attractive algorithmic properties. The results show that gradient boosting significantly improves these base learners and yields results competitive with other tree-based ensemble methods based on randomization.
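The core idea can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the simplest special case, a linear kernel over a vector-valued output space, so the output feature map is the identity and the functional gradient of the squared kernel loss reduces to the ordinary residual. Multi-output regression stumps stand in for the tree-based base learners; the `fit_stump` and `boost` helpers are hypothetical names introduced here for illustration.

```python
import numpy as np

def fit_stump(X, R):
    """Fit a single-split multi-output regression stump to residuals R
    (n_samples x d_out), minimizing total squared error."""
    n, p = X.shape
    best = None
    for j in range(p):
        # candidate thresholds: all unique values except the maximum,
        # so both child nodes are guaranteed to be non-empty
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            pred_l = R[left].mean(axis=0)
            pred_r = R[~left].mean(axis=0)
            err = ((R[left] - pred_l) ** 2).sum() + ((R[~left] - pred_r) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, pred_l, pred_r)
    return best[1:]  # (feature index, threshold, left prediction, right prediction)

def boost(X, Y, n_rounds=20, lr=0.3):
    """Gradient boosting with squared loss in the output space.
    With a linear output kernel the feature map is the identity, so the
    negative functional gradient at each round is simply Y - F(X)."""
    F = np.tile(Y.mean(axis=0), (len(X), 1))  # constant initial model
    stumps = []
    for _ in range(n_rounds):
        R = Y - F                      # pseudo-residuals in output space
        j, t, pl, pr = fit_stump(X, R) # base learner fits the residuals
        stumps.append((j, t, pl, pr))
        F += lr * np.where((X[:, j] <= t)[:, None], pl, pr)
    return Y.mean(axis=0), stumps
```

For a non-trivial output kernel (e.g. over graphs or images), the residual `Y - F` would instead live in the kernel's feature space and the base learner would be fit through the output Gram matrix, which is what allows structured outputs in the paper's general setting.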