Neural network ensemble training by sequential interaction
ICANN'07: Proceedings of the 17th International Conference on Artificial Neural Networks
This paper describes two methods for generating diverse neural networks in an ensemble. The first is negative correlation learning. The second combines cross-validation-style resampling with negative correlation learning, i.e., bagging with negative correlation learning. In negative correlation learning, all individual networks are trained simultaneously on the same training set. In bagging with negative correlation learning, each individual network is trained on a different data set sampled with replacement from the training set. The performance and correct response sets of the two learning methods are compared. The purpose of this paper is to find out how to design more effective neural network ensembles.
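To make the first method concrete, below is a minimal sketch of negative correlation learning in Python. It is an illustration under stated assumptions, not the authors' implementation: the toy sine-regression data, the tiny one-hidden-layer members, and the names `ncl_train` and `lam` are all assumptions for this sketch. It follows the standard Liu-and-Yao-style gradient for member i, (F_i - d) - lambda * (F_i - Fbar), where Fbar is the ensemble mean, so each member is pushed toward the target but away from the consensus.

```python
# Sketch of negative correlation learning (NCL) for a small regression
# ensemble. All members train simultaneously on the SAME training set,
# as the abstract describes. Hyperparameters and model size are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hid):
    return {"W1": rng.normal(0, 0.5, (n_in, n_hid)),
            "b1": np.zeros(n_hid),
            "w2": rng.normal(0, 0.5, n_hid),
            "b2": 0.0}

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])   # hidden activations
    return h, h @ net["w2"] + net["b2"]      # member output F_i(x)

def backward(net, X, h, delta, lr):
    # delta holds dE/dF_i per sample; backprop through the small MLP.
    net["w2"] -= lr * h.T @ delta / len(X)
    net["b2"] -= lr * delta.mean()
    dh = np.outer(delta, net["w2"]) * (1 - h ** 2)
    net["W1"] -= lr * X.T @ dh / len(X)
    net["b1"] -= lr * dh.mean(axis=0)

def ncl_train(X, d, n_nets=4, n_hid=8, lam=0.5, lr=0.1, epochs=500):
    nets = [init_net(X.shape[1], n_hid) for _ in range(n_nets)]
    for _ in range(epochs):
        hs, Fs = zip(*(forward(net, X) for net in nets))
        Fbar = np.mean(Fs, axis=0)            # ensemble output
        for net, h, Fi in zip(nets, hs, Fs):
            # NCL gradient: fitting error minus correlation penalty.
            delta = (Fi - d) - lam * (Fi - Fbar)
            backward(net, X, h, delta, lr)
    return nets

# Toy usage: fit a noisy sine curve and report the ensemble MSE.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
d = np.sin(X).ravel() + rng.normal(0, 0.1, 200)
nets = ncl_train(X, d)
F = np.mean([forward(net, X)[1] for net in nets], axis=0)
print("ensemble MSE:", np.mean((F - d) ** 2))
```

The second method the abstract mentions, bagging with negative correlation learning, would differ from this sketch only in the data each member sees: each network would be trained on its own bootstrap sample drawn with replacement from the training set (e.g., indexing with rng.integers(0, len(X), len(X))) rather than on the shared set.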