Moderated innovations in self-poised ensemble learning

  • Authors:
  • Ricardo Ñanculef; Carlos Valle; Héctor Allende; Claudio Moraga

  • Affiliations:
  • Departamento de Informática, Universidad Técnica Federico Santa María, Valparaíso, Chile; Departamento de Informática, Universidad Técnica Federico Santa María, Valparaíso, Chile; Departamento de Informática, Universidad Técnica Federico Santa María, Valparaíso, Chile; Dortmund University, Dortmund, Germany

  • Venue:
  • CIS'05: Proceedings of the 2005 International Conference on Computational Intelligence and Security, Volume Part I
  • Year:
  • 2005


Abstract

Self-poised ensemble learning is based on the idea of introducing an artificial innovation into the map to be predicted by each machine in the ensemble, such that it compensates for the error incurred by the previous machine. We show that this approach is equivalent to regularizing the loss function used to train each machine with a penalty term that measures decorrelation with the previous machines. Although the algorithm is competitive in practice, the innovations tend to make the individual learners behave increasingly poorly over time, damaging the ensemble's performance. To avoid this, we propose incorporating smoothing parameters that control the level of innovation introduced and can be characterized so as to avoid explosive behavior of the algorithm. Our experimental results report the behavior of neural network ensembles trained with the proposed algorithm on two well-known real-world data sets.
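
The recursion described in the abstract can be illustrated with a short sketch. The code below is one plausible reading, not the authors' exact formulation: it assumes the t-th network is trained on the original target plus a smoothed innovation lam * (residual of the previous network), and that the ensemble output is the plain average of its members. The function names, the averaging rule, the choice of lam, and the use of scikit-learn's MLPRegressor are all assumptions made for illustration.

```python
# Illustrative sketch of moderated self-poised ensemble training.
# Assumption: member t is fit on a target that adds the previous member's
# residual, scaled by a smoothing parameter lam in (0, 1]; lam = 1 would
# recover the unmoderated scheme that the paper argues can become explosive.
import numpy as np
from sklearn.neural_network import MLPRegressor


def train_moderated_self_poised(X, y, n_members=5, lam=0.5, **mlp_kwargs):
    y = np.asarray(y, dtype=float)
    members = []
    target = y.copy()  # first member sees the unmodified target
    for t in range(n_members):
        net = MLPRegressor(**mlp_kwargs)
        net.fit(X, target)
        members.append(net)
        # Innovation for the next member: this member's residual on its own
        # training target, moderated by the smoothing parameter lam.
        residual = target - net.predict(X)
        target = y + lam * residual
    return members


def predict_ensemble(members, X):
    # Illustrative aggregation rule: simple average of the member outputs.
    return np.mean([m.predict(X) for m in members], axis=0)


# Example usage (hypothetical data loader):
# X, y = load_some_regression_data()
# members = train_moderated_self_poised(X, y, n_members=5, lam=0.5,
#                                        hidden_layer_sizes=(10,),
#                                        max_iter=500, random_state=0)
# y_hat = predict_ensemble(members, X)
```

With lam well below 1, each added innovation is damped, which matches the abstract's motivation for smoothing parameters: keeping later members from being trained on targets whose artificial component grows without bound.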