Optimizing the construction of an ensemble of many relatively small and simple classifiers may be more realistic than optimizing the design of a single large and complex classifier. Problems of local minima and slow convergence may be mitigated in an ensemble system where the decisions of locally optimal component classifiers are integrated. However, it is very difficult to design both the structure of the individual neural networks (NNs) in an ensemble and the architecture of the ensemble as a whole. In the n-Bits Binary Coding ICBP Ensemble System (nBBC-ICBP-ES), the only crucial parameter that must be set a priori is an appropriate number of hidden nodes for the corresponding improved circular back-propagation (ICBP) root model. From this, both the number of individual ICBPs and the architecture of each ICBP component in an nBBC-ICBP-ES can be determined directly. nBBC-ICBP-ES is computationally efficient, has relatively few user-specified parameters, and does not require any manual partitioning of the training data set for its construction. It is easy to understand and implement, while inheriting the benefits of the ICBP root model in a natural manner. Simulation and t-test results on four large-scale benchmark classification data sets demonstrate that, in most cases, nBBC-ICBP-ES significantly improves on the classification and generalization performance of the two typical large single ICBPs, of Same-2^{N_h}-ICBP-ES (i.e., an ensemble system of 2^{N_h} identical ICBP components), and of conventional Bagging and AdaBoost ensembles. We conclude that, for NN applications in pattern recognition, assembling many small NNs might be better than using a single large one, and further, that assembling many heterogeneous small NNs might be better than assembling many homogeneous ones. The proposed nBBC-ICBP-ES is simple yet efficient and effective, and potentially significant for NN ensemble applications.
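The abstract does not spell out the binary-coding scheme itself, so the following is only a minimal Python sketch of the general idea it describes: a single a-priori parameter (the number of bits n) enumerates 2^n heterogeneous small networks, whose predictions are combined by majority vote. scikit-learn's MLPClassifier stands in for the ICBP root model, which standard libraries do not provide, and the popcount-based mapping from code to hidden-layer width is an illustrative assumption, not the paper's actual coding.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

n_bits = 3  # the single a-priori parameter (plays the role of N_h)

# Synthetic binary classification data for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each code in [0, 2^n) fixes one component's architecture. Here the
# hidden-layer width is popcount(code) + 1 -- an assumed mapping that
# merely guarantees the 2^n components are not all identical.
ensemble = []
for code in range(2 ** n_bits):
    hidden = bin(code).count("1") + 1
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000,
                        random_state=code)
    clf.fit(X_tr, y_tr)
    ensemble.append(clf)

# Majority vote over the 2^n heterogeneous components.
votes = np.stack([clf.predict(X_te) for clf in ensemble])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (y_pred == y_te).mean())

With n_bits = 3 this trains eight small networks with one to four hidden nodes each; the heterogeneity across components, derived from one user-set parameter, is what the abstract argues gives the system its edge over same-architecture baselines.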