New Results on Combination Methods for Boosting Ensembles

  • Authors:
  • Joaquín Torres-Sospedra; Carlos Hernández-Espinosa; Mercedes Fernández-Redondo

  • Affiliations:
  • Departamento de Ingenieria y Ciencia de los Computadores, Universitat Jaume I, Castellon, Spain C.P. 12071 (all authors)

  • Venue:
  • ICANN '08 Proceedings of the 18th international conference on Artificial Neural Networks, Part I
  • Year:
  • 2008


Abstract

The design of an ensemble of neural networks can be decomposed into two steps. The first consists in generating the ensemble, i.e., training networks with significant differences among them. The second consists in properly combining the information provided by the networks. Adaptive Boosting (Adaboost), one of the best-performing ensemble methods, has been studied and improved by several authors, including us. Moreover, Adaboost and its variants use a specific combiner based on the error of the networks. Unfortunately, no in-depth study of combiners for this kind of ensemble has been carried out yet. In this paper, we study the performance of several important ensemble combiners on ensembles previously trained with Adaboost and Aveboost. The results show that an extra increase in performance can be obtained by applying the appropriate combiner.
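The two-step view above (generate diverse networks, then combine their outputs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-network error rates and class-score outputs are invented, and only two combiners are shown, the Adaboost-style error-weighted vote and a plain output average.

```python
import math

# Hypothetical per-network (weighted) training error rates -- illustrative values.
errors = [0.10, 0.25, 0.40]

# Hypothetical class scores produced by each network for one input sample;
# each row is one network's output over two classes.
outputs = [
    [0.9, 0.1],
    [0.3, 0.7],
    [0.2, 0.8],
]

def adaboost_combiner(errors, outputs):
    """Adaboost-style combiner: weight each network by alpha = 0.5*ln((1-e)/e)."""
    alphas = [0.5 * math.log((1 - e) / e) for e in errors]
    n_classes = len(outputs[0])
    scores = [sum(a * out[c] for a, out in zip(alphas, outputs))
              for c in range(n_classes)]
    return max(range(n_classes), key=scores.__getitem__)

def average_combiner(outputs):
    """Simple output averaging: every network gets equal weight."""
    n_classes = len(outputs[0])
    scores = [sum(out[c] for out in outputs) for c in range(n_classes)]
    return max(range(n_classes), key=scores.__getitem__)

print(adaboost_combiner(errors, outputs))  # the low-error first network dominates
print(average_combiner(outputs))           # the two weaker networks outvote it
```

On this toy sample the two combiners disagree (class 0 versus class 1), which illustrates the paper's point: the combiner applied on top of a boosted ensemble can change the final decision and hence the measured performance.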