Handwritten numerical recognition based on multiple algorithms. Pattern Recognition.
Robust Classification for Imprecise Environments. Machine Learning.
Learning Decision Trees Using the Area Under the ROC Curve. ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning.
Ensembles of Learning Machines. WIRN VIETRI 2002: Proceedings of the 13th Italian Workshop on Neural Nets, Revised Papers.
Tree Induction for Probability-Based Ranking. Machine Learning.
Adaptive mixtures of local experts. Neural Computation.
Application of majority voting to pattern recognition: an analysis of its behavior and performance. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans.
Ensembles of biased classifiers. ICML '05: Proceedings of the 22nd International Conference on Machine Learning.
The ROC isometrics approach to construct reliable classifiers. Intelligent Data Analysis.
Cartoon synthesis using constrained spreading activation network. Multimedia Tools and Applications.
Information, Divergence and Risk for Binary Experiments. The Journal of Machine Learning Research.
Reinventing machine learning with ROC analysis. IBERAMIA-SBIA '06: Proceedings of the 2nd International Joint Conference (10th Ibero-American Conference on AI and 18th Brazilian Conference on Advances in Artificial Intelligence).
ROC analysis of classifiers in machine learning: A survey. Intelligent Data Analysis.
In this paper we investigate methods to detect and repair concavities in ROC curves by manipulating model predictions. The basic idea is that if a point, or a set of points, lies below the line spanned by two other points in ROC space, this information can be used to repair the concavity. The repair effectively builds a hybrid model that combines the two better models with an inversion of the poorer model; for ranking classifiers, it means that certain intervals of the scores are identified as unreliable and become candidates for inversion. We report very encouraging results on 23 UCI data sets, particularly for naive Bayes, where the use of two validation folds yielded significant improvements on more than half of them, with only one loss.
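The geometric operation behind the repair can be sketched in a few lines. The following is a minimal Python illustration (not the authors' implementation): it builds the ROC points of a ranking, finds a point lying strictly below the chord of its neighbours (a concavity), and inverts the corresponding interval of the ranking. For simplicity it detects concavities on the same labelled data it repairs; in the setting described above, unreliable score intervals would be identified on validation folds and the inversion then applied to unseen data.

```python
import numpy as np

def roc_points(labels):
    """ROC points (FPR, TPR) for a ranking given as 0/1 labels in
    descending-score order; point i is reached after the top-i items."""
    labels = np.asarray(labels, dtype=float)
    P, N = labels.sum(), len(labels) - labels.sum()
    tpr = np.concatenate(([0.0], np.cumsum(labels) / P))
    fpr = np.concatenate(([0.0], np.cumsum(1 - labels) / N))
    return fpr, tpr

def find_concavity(fpr, tpr):
    """Index of a ROC point lying strictly below the chord of its
    neighbours, or None if the curve has no concavity."""
    for i in range(1, len(fpr) - 1):
        cross = ((fpr[i + 1] - fpr[i - 1]) * (tpr[i] - tpr[i - 1])
                 - (tpr[i + 1] - tpr[i - 1]) * (fpr[i] - fpr[i - 1]))
        if cross < -1e-12:          # strictly below the chord
            return i
    return None

def auc(labels):
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    labels = np.asarray(labels)
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    return sum(p < n for p in pos for n in neg) / (len(pos) * len(neg))

def repair(labels):
    """Repeatedly invert the two-item interval behind each concavity
    until the ROC curve of the ranking is concavity-free."""
    labels = list(labels)
    while True:
        i = find_concavity(*roc_points(labels))
        if i is None:
            return labels
        # ROC point i is produced by items i-1 and i of the ranking;
        # inverting that interval reflects the concave segment upwards.
        labels[i - 1:i + 1] = labels[i - 1:i + 1][::-1]

ranking = [1, 1, 0, 0, 1, 1, 0, 0]   # toy ranking with a concave ROC segment
print(auc(ranking), auc(repair(ranking)))   # AUC rises after the repair
```

On the toy ranking the middle of the curve sags below the chord of its neighbouring points; successive inversions lift it onto the convex hull, raising the AUC from 0.75 to 1.0.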