Negative correlation learning (NCL) aims to produce ensembles with sound generalization capability by controlling the disagreement among the base learners' outputs. Such a learning scheme is usually implemented with feed-forward neural networks trained by error back-propagation (BPNNs). However, BPNNs suffer from slow convergence, the local minima problem, and model uncertainties caused by the initial weights and the settings of the learning parameters. To achieve a better solution, this paper employs random vector functional link (RVFL) networks as base components and incorporates the NCL strategy for building neural network ensembles. The basis functions of the base models are generated randomly, and the parameters of the RVFL networks can be determined by solving a linear equation system. An analytical solution for these parameters is derived from a cost function defined for NCL together with the well-known least squares method. To examine the merits of the proposed algorithm, a comparative study is carried out on nine benchmark datasets. The results indicate that our approach outperforms other ensemble techniques on the testing datasets in terms of both effectiveness and efficiency.
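To make the idea concrete, the following is a minimal sketch of an NCL-flavored RVFL ensemble, not the paper's exact derivation: each RVFL member uses fixed random basis functions plus direct input links, and its output weights are obtained in closed form from a regularized least squares problem that subtracts an NCL-style penalty toward the ensemble mean (here the mean is held fixed and a few coordinate sweeps propagate the coupling). All function names, the penalty weight `lam`, and the sweep scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_features(X, W, b):
    # Direct links [X] concatenated with random nonlinear features tanh(XW + b)
    return np.hstack([X, np.tanh(X @ W + b)])

def fit_ncl_rvfl_ensemble(X, y, n_models=5, n_hidden=50, lam=0.3,
                          ridge=1e-6, sweeps=5):
    """Illustrative NCL-RVFL ensemble (simplified, not the paper's solver).

    Holding the ensemble mean f_bar fixed, each member's weights beta_i
    minimize  0.5*||D_i beta_i - y||^2 - 0.5*lam*||D_i beta_i - f_bar||^2,
    which has the closed form
        beta_i = ((1 - lam) D_i^T D_i + ridge*I)^{-1} D_i^T (y - lam*f_bar).
    """
    n, d = X.shape
    members = []
    for _ in range(n_models):
        # Random, untrained input weights and biases (the RVFL idea)
        W = rng.standard_normal((d, n_hidden))
        b = rng.standard_normal(n_hidden)
        D = rvfl_features(X, W, b)
        # Plain ridge least squares as the starting point
        beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ y)
        members.append({"W": W, "b": b, "D": D, "beta": beta})
    for _ in range(sweeps):
        preds = np.column_stack([m["D"] @ m["beta"] for m in members])
        f_bar = preds.mean(axis=1)
        for m in members:
            D = m["D"]
            A = (1.0 - lam) * (D.T @ D) + ridge * np.eye(D.shape[1])
            m["beta"] = np.linalg.solve(A, D.T @ (y - lam * f_bar))
    return members

def predict(members, X):
    # Ensemble output is the simple average of the members
    return np.mean([rvfl_features(X, m["W"], m["b"]) @ m["beta"]
                    for m in members], axis=0)
```

Because each member is solved by a linear system rather than gradient descent, the scheme avoids the slow convergence and initialization sensitivity of back-propagation training; for 0 < lam < 1 the fixed-point iteration over `f_bar` is a contraction, so a handful of sweeps suffices.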