Self-poised ensemble learning is based on introducing an artificial innovation into the target map predicted by each machine in the ensemble, so that each machine compensates for the error incurred by the previous one. We show that this approach is equivalent to regularizing the loss function used to train each machine with a penalty term that measures decorrelation with the previous machines. Although the algorithm is competitive in practice, we also observe that the innovations tend to degrade the individual learners over time, damaging the ensemble's performance. To avoid this, we propose incorporating smoothing parameters that control the level of innovation introduced and can be characterized so as to prevent explosive behavior of the algorithm. Our experimental results report the behavior of neural-network ensembles trained with the proposed algorithm on two well-known real data sets.
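The training scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the innovation takes the form of adding, with a smoothing parameter `lam`, the previous machine's residual to the target, and it substitutes ordinary least-squares linear models for the neural networks used in the paper. The function names `train_self_poised_ensemble` and `predict_ensemble` are hypothetical.

```python
import numpy as np

def train_self_poised_ensemble(X, y, n_machines=5, lam=0.5):
    """Sketch of sequential self-poised training.

    Machine t is fit to an 'innovated' target: the true target plus
    lam times the residual of machine t-1, so it partially compensates
    the previous machine's error. The smoothing parameter lam (assumed
    in [0, 1)) controls the level of innovation and keeps it from
    growing without bound.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    weights = []
    prev_pred = None
    for t in range(n_machines):
        if prev_pred is None:
            target = y                           # first machine: plain target
        else:
            target = y + lam * (y - prev_pred)   # innovated target
        w, *_ = np.linalg.lstsq(Xb, target, rcond=None)
        weights.append(w)
        prev_pred = Xb @ w
    return weights

def predict_ensemble(weights, X):
    """Combine the machines by simple averaging of their outputs."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    preds = np.stack([Xb @ w for w in weights])
    return preds.mean(axis=0)
```

With `lam = 0` the scheme reduces to training identical independent machines; larger values inject stronger innovations, which is where the explosive behavior the abstract warns about can arise if `lam` is not kept small.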