In this paper, a generalized adaptive ensemble generation and aggregation (GAEGA) method for designing multiple classifier systems (MCSs) is proposed. GAEGA adopts an "over-generation and selection" strategy to achieve a good bias-variance tradeoff. In the training phase, different ensembles of classifiers are adaptively generated by fitting the validation data globally to different degrees. The test data are then classified by each of the generated ensembles, and the final decision is made by weighing both how well each ensemble fits the validation data locally and the risk of overfitting. The performance of GAEGA is assessed experimentally against other multiple classifier aggregation methods on 16 data sets. The results show that GAEGA significantly outperforms the other methods, with improvements in average accuracy ranging from 2.6% to 17.6%.
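The core "over-generation and selection" idea can be illustrated with a minimal sketch: generate many candidate ensembles, score each on held-out validation data, and keep the best-fitting one. Note that this is not the authors' GAEGA algorithm (which generates ensembles adaptively and also weighs local fit and overfitting risk at test time); the threshold-stump base learners and all helper names below are hypothetical stand-ins chosen to keep the example self-contained.

```python
from itertools import combinations

# Hypothetical base learners: 1-D threshold stumps (x >= t -> class 1).
def make_stump(threshold):
    return lambda x: 1 if x >= threshold else 0

def accuracy(clf_set, data):
    """Fraction of points the ensemble's majority vote classifies correctly."""
    correct = 0
    for x, y in data:
        votes = [clf(x) for clf in clf_set]
        if max(set(votes), key=votes.count) == y:
            correct += 1
    return correct / len(data)

def over_generate_and_select(pool, validation, size=3):
    """Over-generate all size-k candidate ensembles from the pool, then
    select the one that best fits the validation data (a crude stand-in
    for GAEGA's adaptive generation and selection)."""
    best, best_acc = None, -1.0
    for candidate in combinations(pool, size):
        acc = accuracy(candidate, validation)
        if acc > best_acc:
            best, best_acc = candidate, acc
    return best, best_acc

# Toy problem: true label is 1 exactly when x >= 0.5.
pool = [make_stump(t / 10) for t in range(1, 10)]
validation = [(x / 20, 1 if x / 20 >= 0.5 else 0) for x in range(21)]
ensemble, acc = over_generate_and_select(pool, validation)
print(round(acc, 2))  # -> 1.0 (a candidate whose median threshold is 0.5 fits perfectly)
```

With three threshold stumps, the majority vote fires exactly when the input exceeds the median threshold, so any candidate containing 0.5 as its middle threshold reproduces the true boundary; the selection step finds it by validation accuracy alone.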