Negative Correlation Learning (NCL) has been successfully applied to the construction of neural network ensembles. It encourages the neural networks that compose the ensemble to be accurate and, at the same time, different from each other. This diversity is a desirable feature for incremental learning, because some of the networks may adapt faster and better to new data than others, making NCL a potentially powerful approach to incremental learning. With this in mind, this paper presents an analysis of NCL aimed at determining its strong and weak points for incremental learning. The analysis shows that NCL can be used to overcome catastrophic forgetting, an important problem in incremental learning. However, when catastrophic forgetting is very low, the ensemble takes no advantage of using more than one of its networks to learn new data, and the test error is high. When all the networks are used to learn new data, some of them can indeed adapt better than others, but catastrophic forgetting increases. It is therefore important to find a trade-off between overcoming catastrophic forgetting and using the entire ensemble to learn new data. The NCL results are comparable with those of other approaches that were specifically designed for incremental learning. Thus, the study presented in this work reveals encouraging results, showing that NCL is a promising approach to incremental learning.
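To make the mechanism concrete, the following is a minimal sketch of the standard NCL formulation (the penalty and gradient from Liu and Yao's original scheme) on a toy regression task; it is not the experimental setup of this paper, and the data, ensemble size M, penalty strength lam, and learning rate are illustrative assumptions:

```python
import numpy as np

# Sketch of Negative Correlation Learning (NCL) on a toy regression task.
# Each ensemble member i minimises 0.5*(f_i - y)^2 + lam * p_i, with the
# penalty p_i = (f_i - fbar) * sum_{j != i}(f_j - fbar) = -(f_i - fbar)^2,
# which correlates a member's error negatively with the rest of the
# ensemble. The gradient with respect to the member's output is then
# (f_i - y) - lam * (f_i - fbar).
rng = np.random.default_rng(0)

X = rng.uniform(-1.0, 1.0, size=(200, 1))        # toy inputs (assumed)
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)   # noisy linear target

M = 3      # ensemble size (illustrative)
lam = 0.5  # penalty strength; lam = 0 recovers independent training
lr = 0.1   # gradient-descent step size

w = rng.normal(size=(M, 1))  # one linear model per ensemble member
b = np.zeros(M)

for _ in range(300):
    preds = X @ w.T + b                       # member outputs, shape (n, M)
    fbar = preds.mean(axis=1, keepdims=True)  # ensemble (average) output
    # NCL gradient per member output: accuracy term minus diversity term.
    grad = (preds - y[:, None]) - lam * (preds - fbar)
    w -= lr * (grad.T @ X) / len(X)
    b -= lr * grad.mean(axis=0)

ensemble_mse = np.mean(((X @ w.T + b).mean(axis=1) - y) ** 2)
```

With 0 < lam < 1 the diversity term only slows each member's pull toward the ensemble mean, so training remains stable while the members stay spread out; it is this spread that the paper exploits, since differently positioned members can adapt to new data at different rates.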