Probabilistic aggregation of classifiers for incremental learning

  • Authors:
  • Patricia Trejo; Ricardo Ñanculef; Héctor Allende; Claudio Moraga

  • Affiliations:
  • Universidad Técnica Federico Santa María, Departamento de Informática, Valparaíso, Chile (Trejo, Ñanculef, Allende); European Centre for Soft Computing, Mieres, Asturias, Spain and Dortmund University, Dortmund, Germany (Moraga)

  • Venue:
  • IWANN'07 Proceedings of the 9th International Work-Conference on Artificial Neural Networks
  • Year:
  • 2007

Abstract

We work with a recently proposed algorithm in which an ensemble of base classifiers, combined through weighted majority voting, is used for incremental classification of data. To successfully accommodate novel information without compromising previously acquired knowledge, this algorithm requires an adequate strategy for determining the voting weights. Given an instance to classify, we propose to define each voting weight as the posterior probability of the corresponding hypothesis given the instance. By operating with priors and likelihood models, the obtained weights account not only for the location of the instance in the class-specific feature spaces but also for the coverage of each class by the classifier and the quality of the learned hypothesis. This approach can provide important improvements in the generalization performance of the resulting classifier and in its ability to control the stability/plasticity tradeoff. Experiments are carried out on three real classification problems previously used to test incremental algorithms.
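The sketch below illustrates the kind of posterior-weighted voting the abstract describes: each base hypothesis is paired with a likelihood model of the instances it was trained on, and its voting weight for a new instance is taken proportional to the posterior P(h_t | x) ∝ P(x | h_t) P(h_t). The Gaussian likelihood model, the class names, and the `PosteriorWeightedEnsemble` interface are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

class PosteriorWeightedEnsemble:
    """Minimal sketch of posterior-weighted majority voting (assumed interface)."""

    def __init__(self):
        # Each entry: (base classifier, mean, covariance, prior P(h_t))
        self.hypotheses = []

    def add_hypothesis(self, clf, X_train, prior):
        # Fit a Gaussian likelihood model on the data chunk used to train clf.
        # This is an illustrative choice of likelihood model, not the paper's.
        mean = X_train.mean(axis=0)
        cov = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
        self.hypotheses.append((clf, mean, cov, prior))

    def _likelihood(self, x, mean, cov):
        # Multivariate Gaussian density P(x | h_t).
        d = x - mean
        inv = np.linalg.inv(cov)
        norm = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(cov))
        return np.exp(-0.5 * d @ inv @ d) / norm

    def predict(self, x, classes):
        # Posterior weight of each hypothesis given the instance x.
        weights = np.array([
            self._likelihood(x, mean, cov) * prior
            for _, mean, cov, prior in self.hypotheses
        ])
        weights /= weights.sum() + 1e-12

        # Weighted majority vote across the base classifiers.
        votes = {c: 0.0 for c in classes}
        for (clf, _, _, _), w in zip(self.hypotheses, weights):
            votes[clf.predict(x.reshape(1, -1))[0]] += w
        return max(votes, key=votes.get)
```

Any classifier exposing a `predict` method (e.g. a scikit-learn estimator trained on one data chunk) can serve as a base hypothesis here; because the weights depend on the instance being classified, classifiers trained on regions of the feature space far from x contribute little to the vote, which is what lets new knowledge be incorporated without overwriting old hypotheses.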