In this paper, architectures and methods of decision aggregation in classifier ensembles are investigated. Typically, ensembles are designed so that each classifier is trained independently and decision fusion is performed as a post-processing module. In this study, however, we are interested in making the fusion a more adaptive process. We first propose a new architecture that uses the features of a problem to guide the decision fusion process. By using both the features and the classifiers' outputs, the recognition strengths and weaknesses of the different classifiers are identified. This information is used to improve the overall generalization capability of the system. Furthermore, we propose a co-operative training algorithm that allows the final classification to determine whether further training should be carried out on the components of the architecture. The performance of the proposed architecture is assessed by testing it on several benchmark problems. The new architecture shows improvement over existing aggregation techniques. Moreover, the proposed co-operative training algorithm limits the need for user intervention while maintaining a level of accuracy competitive with that of most other approaches.
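The core idea of feature-guided fusion can be sketched as follows: instead of combining only the base classifiers' outputs, the fusion module also receives the original input features, so it can learn where each classifier is locally reliable. The sketch below is a minimal illustration under stated assumptions, not the paper's actual components: the toy Gaussian data, the nearest-centroid base classifiers, and the nearest-centroid fusion module are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs (a hypothetical stand-in
# for the benchmark problems mentioned in the abstract).
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    """Nearest-centroid 'classifier': store the per-class mean vectors."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def soft_outputs(centroids, X):
    """Class scores: softmax over negative distances to each class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

# Two base classifiers, each seeing only one feature
# (simulating diverse experts with different local strengths).
c0 = fit_centroids(X[:, :1], y)
c1 = fit_centroids(X[:, 1:], y)

# Feature-based aggregation: the fusion module's input concatenates the
# ORIGINAL features with the base classifiers' soft outputs, so the fused
# decision can depend on where in feature space the sample lies.
Z = np.hstack([X, soft_outputs(c0, X[:, :1]), soft_outputs(c1, X[:, 1:])])
meta = fit_centroids(Z, y)

pred = soft_outputs(meta, Z).argmax(axis=1)
acc = (pred == y).mean()
print(f"training accuracy of fused system: {acc:.2f}")
```

In a classical post-processing ensemble, the fusion step would see only the two soft-output blocks; appending `X` to `Z` is what lets the aggregator exploit region-dependent classifier accuracy.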