Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term into the cost function of each individual network, so that each network minimizes its mean square error (MSE) together with the error correlation within the ensemble. This paper analyzes NCL and reveals that training with NCL (when the penalty parameter λ = 1) corresponds to training the entire ensemble as a single learning machine that minimizes the MSE without any regularization. This analysis explains why NCL is prone to overfitting noise in the training set. The paper also demonstrates that tuning the correlation parameter λ in NCL by cross-validation cannot overcome the overfitting problem. To address this, the paper proposes the regularized negative correlation learning (RNCL) algorithm, which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objective, comprising the MSE and the regularization term, into a set of sub-objectives, each implemented by an individual neural network. The paper also provides a Bayesian interpretation of RNCL and an automatic algorithm, based on Bayesian inference, for optimizing the regularization parameters. The RNCL formulation is applicable to any nonlinear estimator that minimizes the MSE. Experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level in the data set is nontrivial.
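For context, the standard NCL sub-objective from the ensemble-learning literature is sketched below; this is background, not text quoted from this paper, and conventions for constant factors vary across papers, so the form here is chosen so that the λ = 1 identity comes out exact. Here f_ens denotes the simple average of the M member outputs f_i:

e_i \;=\; \sum_{n=1}^{N}\bigl(f_i(x_n)-y_n\bigr)^2 \;+\; \lambda\sum_{n=1}^{N} p_i(x_n),
\qquad
p_i(x_n) \;=\; -\bigl(f_i(x_n)-f_{\mathrm{ens}}(x_n)\bigr)^2 .

The abstract's λ = 1 observation follows from the pointwise identity

\frac{1}{M}\sum_{i=1}^{M}\Bigl[\bigl(f_i-y\bigr)^2-\bigl(f_i-f_{\mathrm{ens}}\bigr)^2\Bigr]
\;=\;\bigl(f_{\mathrm{ens}}-y\bigr)^2,

which holds because \sum_{i}\bigl(f_i-f_{\mathrm{ens}}\bigr)=0: at λ = 1 the summed member objectives reduce to M times the plain ensemble MSE, with no regularization term left. One natural reading of RNCL's "additional regularization term" (the α_i and w_i below are illustrative notation, not necessarily the paper's) is

E \;=\; \sum_{n=1}^{N}\bigl(f_{\mathrm{ens}}(x_n)-y_n\bigr)^2
\;+\; \sum_{i=1}^{M}\alpha_i\,\mathbf{w}_i^{\top}\mathbf{w}_i ,

with per-member weight vectors w_i and per-member regularization parameters α_i, the quantities the abstract says are optimized by Bayesian inference.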
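Below is a minimal, self-contained sketch of the kind of training loop the abstract describes; it is not the authors' implementation. It assumes ensemble members with fixed random hidden layers (so only output weights are trained and the gradients are exact), uses a commonly seen simplified form of the NCL gradient, and stands in for RNCL's regularization term with plain per-member weight decay; lam, alpha, and all other names are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data with additive noise.
N, M, H = 200, 5, 30           # samples, ensemble size, hidden units per member
X = rng.uniform(-3.0, 3.0, size=(N, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(N)

# Each member: a fixed random tanh hidden layer, so only the output
# weights w[i] are trained below.
W_in = [rng.standard_normal((1, H)) for _ in range(M)]
b_in = [rng.standard_normal(H) for _ in range(M)]
Phi = [np.tanh(X @ Wi + bi) for Wi, bi in zip(W_in, b_in)]  # (N, H) features
w = [np.zeros(H) for _ in range(M)]                          # output weights

lam, alpha, lr, steps = 1.0, 1e-2, 0.02, 4000
for _ in range(steps):
    f = np.stack([Phi[i] @ w[i] for i in range(M)])  # (M, N) member outputs
    f_ens = f.mean(axis=0)                           # ensemble output
    for i in range(M):
        # Per-member sub-objective gradient w.r.t. the member's output:
        #   (f_i - y)            -> member MSE term
        #   -lam * (f_i - f_ens) -> NCL correlation penalty (simplified form)
        # At lam = 1 this collapses to (f_ens - y): every member follows the
        # gradient of the single-machine ensemble MSE, which is exactly the
        # abstract's lambda = 1 observation.
        g_out = (f[i] - y) - lam * (f[i] - f_ens)
        # alpha * w[i] is per-member weight decay, standing in for RNCL's
        # additional regularization term in this sketch.
        grad = Phi[i].T @ g_out / N + alpha * w[i]
        w[i] -= lr * grad

f_ens = np.mean([Phi[i] @ w[i] for i in range(M)], axis=0)
print(f"ensemble training MSE: {np.mean((f_ens - y) ** 2):.4f}")

Setting lam = 1 in the sketch makes each member's gradient equal to the ensemble-level residual f_ens - y, a hands-on way to see the paper's claim that λ = 1 turns NCL into unregularized single-machine training; the alpha term is what keeps the sketch regularized in that regime.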