The One-vs-One strategy is one of the most commonly used decomposition techniques for multi-class classification problems: the multi-class problem is divided into easier-to-solve binary classification problems, one for each pair of classes from the original problem, which are then learned by independent base classifiers. This way of performing the division produces the so-called non-competence problem. It arises every time an instance is classified, since the instance is submitted to all the base classifiers even though the outputs of some of them are not meaningful (they were not trained with instances of the class of the instance being classified). This issue may lead to erroneous classifications because, despite their incompetence, all classifiers' decisions are usually considered in the aggregation phase. In this paper, we propose a dynamic classifier selection strategy for the One-vs-One scheme that tries to avoid the non-competent classifiers when their output is unlikely to be of interest. We consider the neighborhood of each instance to decide whether a classifier may be competent or not. In order to verify the validity of the proposed method, we carry out a thorough experimental study considering different base classifiers, comparing our proposal with the best-performing state-of-the-art aggregation for each base classifier from the five Machine Learning paradigms selected. The findings drawn from the empirical analysis are supported by the appropriate statistical tests.
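The selection mechanism described above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's actual method or experimental setup: the base classifiers are toy nearest-centroid rules, and the competence criterion assumed here is that a pairwise classifier for classes (ci, cj) votes only if at least one of its two classes appears among the k nearest training neighbours of the query instance.

```python
import numpy as np

def train_ovo_centroids(X, y):
    """Train one binary 'classifier' per pair of classes.

    Each pairwise model is a nearest-centroid rule -- a deliberately
    simple stand-in for the independent base classifiers.
    """
    classes = np.unique(y)
    models = {}
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            models[(ci, cj)] = (X[y == ci].mean(axis=0),
                                X[y == cj].mean(axis=0))
    return models

def predict_dynamic_ovo(x, X, y, models, k=3):
    """Classify x, skipping pairwise classifiers deemed non-competent.

    Competence rule (an assumption of this sketch): the classifier for
    the pair (ci, cj) votes only if ci or cj appears among the k
    nearest training neighbours of x.
    """
    dists = np.linalg.norm(X - x, axis=1)
    neighbour_classes = set(y[np.argsort(dists)[:k]])
    votes = {}
    for (ci, cj), (mi, mj) in models.items():
        if ci not in neighbour_classes and cj not in neighbour_classes:
            continue  # non-competent classifier: excluded from aggregation
        winner = ci if np.linalg.norm(x - mi) <= np.linalg.norm(x - mj) else cj
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

For a query lying deep inside one class's region, the neighbourhood contains only that class, so every pairwise classifier that does not involve it is silently dropped before the majority vote is taken; with the full (static) One-vs-One aggregation, those classifiers would still cast potentially misleading votes.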