Stacked generalization. Neural Networks.
Combining the results of several neural network classifiers. Neural Networks.
Hierarchical mixtures of experts and the EM algorithm. Neural Computation.
Machine Learning.
Combination of Multiple Classifiers Using Local Accuracy Estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence.
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, special issue on the 26th Annual ACM Symposium on Theory of Computing (STOC'94), May 23–25, 1994, and the Second Annual European Conference on Computational Learning Theory (EuroCOLT'95), March 13–15, 1995.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
The Random Subspace Method for Constructing Decision Forests. IEEE Transactions on Pattern Analysis and Machine Intelligence.
A Theoretical Study on Six Classifier Fusion Strategies. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Ensembling neural networks: many could be better than all. Artificial Intelligence.
Ensemble Methods in Machine Learning. MCS '00: Proceedings of the First International Workshop on Multiple Classifier Systems.
ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning.
Pattern Classification (2nd Edition).
Combining Pattern Classifiers: Methods and Algorithms.
Statistical Comparisons of Classifiers over Multiple Data Sets. The Journal of Machine Learning Research.
Ensemble Pruning Via Semi-definite Programming. The Journal of Machine Learning Research.
Potential Functions in Mathematical Pattern Recognition. IEEE Transactions on Computers.
From dynamic classifier selection to dynamic ensemble selection. Pattern Recognition.
Adaptive mixtures of local experts. Neural Computation.
An Analysis of Ensemble Pruning Techniques Based on Ordered Aggregation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Knowledge discovery on RFM model using Bernoulli sequence. Expert Systems with Applications: An International Journal.
On a New Measure of Classifier Competence Applied to the Design of Multiclassifier Systems. ICIAP '09: Proceedings of the 15th International Conference on Image Analysis and Processing.
Selection-fusion approach for classification of datasets with missing values. Pattern Recognition.
Information theoretic combination of pattern classifiers. Pattern Recognition.
Learn++.MF: A random subspace approach for the missing feature problem. Pattern Recognition.
A Measure of Competence Based on Randomized Reference Classifier for Dynamic Ensemble Selection. ICPR '10: Proceedings of the 2010 20th International Conference on Pattern Recognition.
ISBMDA'06: Proceedings of the 7th International Conference on Biological and Medical Data Analysis.
AIASABEBI'11: Proceedings of the 11th WSEAS International Conference on Applied Informatics and Communications, the 4th WSEAS International Conference on Biomedical Electronics and Biomedical Informatics, and the International Conference on Computational Engineering in Systems Applications.
Risk function estimation for subproblems in a hierarchical classifier. Pattern Recognition Letters.
ITIB'12: Proceedings of the Third International Conference on Information Technologies in Biomedicine.
Classifier fusion with interval-valued weights. Pattern Recognition Letters.
A survey of multiple classifier systems as hybrid systems. Information Fusion.
The concept of classifier competence is fundamental to multiple classifier systems (MCSs). In this study, a method for calculating classifier competence is developed using a probabilistic model. First, a randomised reference classifier (RRC) is constructed whose class supports are realisations of random variables with beta probability distributions. The parameters of the distributions are chosen so that, for each feature vector in a validation set, the expected values of the class supports produced by the RRC equal the class supports produced by the modelled classifier. This allows the probability of correct classification of the RRC to be used as the competence of the modelled classifier. The competences calculated for the validation set are then generalised to the entire feature space by constructing a competence function based on a potential function model or regression. Three systems, based on dynamic classifier selection and dynamic ensemble selection (DES), were constructed using the method developed. Over 22 data sets and a heterogeneous ensemble, the DES-based system achieved a statistically significantly higher average rank than eight benchmark MCSs. The results indicate that the full vector of class supports should be used for evaluating classifier competence, as this potentially improves the performance of MCSs.
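The two steps described in the abstract can be sketched in code. This is a minimal illustrative sketch, not the paper's exact procedure: the beta parameters are obtained here by fixing an assumed concentration `s` and matching means (the paper derives them from the support constraints), the RRC's probability of correct classification is estimated by Monte Carlo rather than analytically, and the kernel width `h` of the potential-function generalisation is a free assumption.

```python
import numpy as np

def rrc_competence(supports, true_class, s=10.0, n_samples=10_000, rng=None):
    """Monte Carlo estimate of the RRC competence at one validation point.

    supports: class supports produced by the modelled classifier (sum to 1).
    The j-th RRC support is drawn from Beta(a_j, b_j) with a_j = s * d_j and
    b_j = s * (1 - d_j), so its expected value equals d_j = supports[j];
    the concentration s is an assumption of this sketch.
    The competence is the probability that the RRC's support for the true
    class is maximal, i.e. that the RRC classifies the point correctly.
    """
    rng = np.random.default_rng(rng)
    d = np.asarray(supports, dtype=float)
    a, b = s * d, s * (1.0 - d)
    draws = rng.beta(a, b, size=(n_samples, d.size))  # rows: RRC realisations
    return float(np.mean(draws.argmax(axis=1) == true_class))

def competence_at(x, val_points, val_competences, h=1.0):
    """Potential-function generalisation to an arbitrary feature vector x:
    a Gaussian-kernel weighted average of validation-set competences
    (kernel width h is an assumed hyperparameter)."""
    sq_dists = np.sum((val_points - x) ** 2, axis=1)
    w = np.exp(-sq_dists / (2.0 * h ** 2))
    return float(np.sum(w * val_competences) / np.sum(w))
```

A classifier that assigns a confident, correct support vector such as `[0.9, 0.05, 0.05]` yields an RRC competence near 1, while near-uniform supports yield a competence near 1/K, which is what makes the measure usable for dynamic selection of the most competent ensemble members.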