In this work, we focus on the problem of training ensembles or, more generally, sets of self-organizing maps (SOMs). In light of recent theory on ensemble learning, in particular negative correlation learning (NCL), the question arises whether SOM ensemble learning can benefit from non-independent learning, in which the individual learning stages are interlinked by a term penalizing correlated errors. We show that SOMs with a small number of neurons are well suited as weak ensemble components. Using our approach, we obtain efficiently trained SOM ensembles that outperform other reference learners. Owing to the transparency of SOMs, we can give insights into the interrelation between diversity and sublocal accuracy inside SOMs. We shed light on the diversity arising from a combination of several factors: explicit versus implicit, and inter-diversity versus intra-diversity. NCL fully exploits the potential of SOM ensemble learning when the individual neural networks cooperate at the highest level and stability is satisfied. The reported quantified diversities correlate strongly with prediction performance.
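To make the NCL mechanism concrete, the following is a minimal sketch (in Python/NumPy) of one way the standard NCL penalty, e_i = (f_i - y)^2 - lambda (f_i - f_bar)^2 where f_bar is the ensemble mean, could be folded into the training of a small SOM ensemble. The TinySOM class, its per-unit output readout, and all parameter names (n_units, lam, sigma, and so on) are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

# Sketch: negative correlation learning (NCL) for an ensemble of small
# SOMs used as regressors. Each map carries an input prototype and a
# scalar output value per unit; these choices are assumptions.

rng = np.random.default_rng(0)

class TinySOM:
    """A 1-D SOM whose units hold an input prototype and an output value."""
    def __init__(self, n_units, dim):
        self.w = rng.normal(size=(n_units, dim))   # input prototypes
        self.out = rng.normal(size=n_units)        # per-unit output values

    def bmu(self, x):
        # Index of the best-matching unit for input x.
        return np.argmin(((self.w - x) ** 2).sum(axis=1))

    def predict(self, x):
        return self.out[self.bmu(x)]

def train_ncl(maps, X, y, lam=0.5, lr=0.1, sigma=1.0, epochs=20):
    """Train the SOM ensemble with an NCL-style penalty.

    Up to constant factors, each map i descends the error
    (f_i - y)^2 - lam * (f_i - f_bar)^2, so its output update combines
    the usual error term with a term pushing it away from the ensemble
    mean f_bar (the negative-correlation penalty).
    """
    n_units = maps[0].w.shape[0]
    idx = np.arange(n_units)
    for _ in range(epochs):
        for x, t in zip(X, y):
            preds = np.array([m.predict(x) for m in maps])
            f_bar = preds.mean()
            for m, f_i in zip(maps, preds):
                b = m.bmu(x)
                h = np.exp(-((idx - b) ** 2) / (2 * sigma ** 2))  # neighbourhood
                m.w += lr * h[:, None] * (x - m.w)                # SOM prototype update
                # Simplified NCL gradient on the unit outputs:
                grad = (f_i - t) - lam * (f_i - f_bar)
                m.out -= lr * h * grad

def predict_ensemble(maps, x):
    return np.mean([m.predict(x) for m in maps])

# Toy usage: learn y = sin(x) with five weak (8-unit) SOMs.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
maps = [TinySOM(n_units=8, dim=1) for _ in range(5)]
train_ncl(maps, X, y)
print(predict_ensemble(maps, np.array([1.0])))  # should land roughly near sin(1) ~ 0.84
```

The design intent mirrors the abstract: the penalty term explicitly decorrelates the maps' errors (each map is nudged away from the ensemble mean while still tracking the target), and keeping each SOM small makes the components deliberately weak, so that prediction quality has to come from the diversity of the ensemble rather than from any single map.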