ISNN'13 Proceedings of the 10th international conference on Advances in Neural Networks - Volume Part I
In many pattern recognition tasks, combining classifiers has shown a significant potential gain over the performance of the single best classifier. This improvement turns out to be contingent on a sufficient level of diversity among the classifiers, which in general can be regarded as a selective property of classifier subsets. Given a large pool of classifiers, an intelligent classifier selection process therefore becomes a crucial issue in the design of a multiple classifier system. In this paper, we investigate three evolutionary optimization methods for the classifier selection task. Based on our previous studies of various diversity measures and their correlation with majority voting error, we adopt the majority voting performance on the validation set directly as the fitness function guiding the search. To prevent overfitting to the training data, we extract a population of the best unique classifier combinations and use them for a second stage of majority voting. We show empirically that efficient evolutionary selection leads to results comparable to the absolute best found by exhaustive search. Moreover, as shown for selected datasets, introducing a second combining stage by majority voting has the potential both to further improve the recognition rate and to increase the reliability of the combined outputs.
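The two-stage scheme described above can be sketched as a simple genetic algorithm over classifier subsets: each individual is a bitmask selecting classifiers, the fitness is majority-voting accuracy on a held-out validation set, and the best unique subsets found are then combined by a second majority vote. This is a minimal illustrative sketch, not the authors' implementation: the synthetic classifier predictions, the GA parameters (population size, mutation rate), and all function names are assumptions for demonstration only.

```python
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(0)

# Hypothetical setup: 15 classifiers' predictions on a 2-class validation set,
# each classifier agreeing with the true label with accuracy in [0.6, 0.8].
n_clf, n_val = 15, 200
y_val = rng.integers(0, 2, n_val)
preds = np.array([np.where(rng.random(n_val) < acc, y_val, 1 - y_val)
                  for acc in rng.uniform(0.6, 0.8, n_clf)])

def majority_vote_acc(mask):
    """Fitness: majority-voting accuracy of the selected subset on validation data."""
    if not mask.any():
        return 0.0
    votes = preds[mask].mean(axis=0)            # fraction of subset voting for class 1
    return float(np.mean((votes > 0.5) == y_val))

def evolve(pop_size=30, gens=40, p_mut=0.1):
    """Evolve bitmasks selecting classifier subsets; return population sorted by fitness."""
    pop = [rng.random(n_clf) < 0.5 for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=majority_vote_acc, reverse=True)
        elite = pop[:pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(range(len(elite)), 2)
            cut = rng.integers(1, n_clf)        # one-point crossover
            child = np.concatenate([elite[a][:cut], elite[b][cut:]])
            child ^= rng.random(n_clf) < p_mut  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return sorted(pop, key=majority_vote_acc, reverse=True)

best_pop = evolve()

# Second stage: combine the outputs of the best unique subsets by another majority vote.
unique = {tuple(m) for m in best_pop[:10]}
stage1 = np.array([(preds[np.array(m, bool)].mean(axis=0) > 0.5).astype(int)
                   for m in unique])
final = (stage1.mean(axis=0) > 0.5).astype(int)

print("best subset acc:", majority_vote_acc(best_pop[0]))
print("two-stage acc:", float(np.mean(final == y_val)))
```

Using validation accuracy directly as the fitness function, as the abstract describes, avoids relying on a diversity proxy; the second-stage vote over the best unique subsets is what guards against the search overfitting the validation data.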