Looking inside self-organizing map ensembles with resampling and negative correlation learning

  • Authors:
  • Alexandra Scherbart; Tim W. Nattkemper

  • Venue:
  • Neural Networks
  • Year:
  • 2011

Abstract

In this work, we focus on the problem of training ensembles or, more generally, sets of self-organizing maps (SOMs). In the light of recent theory on ensemble learning, in particular negative correlation learning (NCL), the question arises whether SOM ensemble learning can benefit from non-independent learning, where the individual learning stages are interlinked by a term penalizing correlation in errors. We show that SOMs with a small number of neurons are well suited as weak ensemble components. Using our approach, we obtain efficiently trained SOM ensembles that outperform other reference learners. Owing to the transparency of SOMs, we can give insights into the interrelation between diversity and sublocal accuracy inside SOMs. We shed light on the diversity arising from a combination of several factors: explicit versus implicit, as well as inter- versus intra-diversities. NCL fully exploits the potential of SOM ensemble learning when the individual neural networks cooperate at the highest level and stability is satisfied. The reported quantified diversities exhibit high correlations with prediction performance.
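
For readers unfamiliar with NCL, the sketch below illustrates the generic negative correlation penalty of Liu and Yao for an ensemble of regressors, i.e. the kind of error-correlation-penalizing term the abstract refers to. It is not the paper's specific SOM adaptation; the function name, the toy predictions, and the lambda value of 0.5 are illustrative assumptions.

    import numpy as np

    def ncl_errors(preds: np.ndarray, target: float, lam: float = 0.5) -> np.ndarray:
        """Per-member negative correlation learning (NCL) error for one sample.

        preds  : shape (M,) predictions of the M ensemble members
        target : scalar ground-truth value
        lam    : penalty strength lambda; lam = 0 recovers independent training
        """
        f_bar = preds.mean()                  # combined (mean) ensemble output
        mse = 0.5 * (preds - target) ** 2     # individual squared errors
        # Penalty p_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar) = -(f_i - f_bar)^2:
        # minimizing it spreads members around the ensemble mean, so their
        # errors become negatively correlated instead of redundant.
        penalty = -(preds - f_bar) ** 2
        return mse + lam * penalty

    # Toy usage: three members predicting the same sample (hypothetical values)
    print(ncl_errors(np.array([0.9, 1.1, 1.4]), target=1.0, lam=0.5))

With lam = 0 each member is trained on its own squared error alone; increasing lam trades individual accuracy against diversity among the members.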