A Critical Analysis of Variants of the AUC
ECML PKDD '08 Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I
The area under the ROC curve, or AUC, has been widely used to assess the ranking performance of binary scoring classifiers. Given a sample, the metric considers only the ordering of positive and negative instances, i.e., the sign of the corresponding score differences. From a model evaluation and selection point of view, ignoring the absolute value of these differences may appear unreasonable. For this reason, several variants of the AUC metric that take score differences into account have recently been proposed. In this paper, we present a unified framework for these metrics and provide a formal analysis. We conjecture that, despite their intuitive appeal, none of these variants is effective, at least with regard to model evaluation and selection. An extensive empirical analysis corroborates this conjecture. Our findings also shed light on recent research dealing with the construction of AUC-optimizing classifiers.
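For concreteness, the standard AUC can be computed directly from the pairwise score differences the abstract refers to: it is the fraction of positive-negative pairs ranked correctly, with ties counted as one half. The abstract does not spell out the proposed variants, so the soft_auc function in the sketch below is only an illustrative assumption of the general idea of weighting each pair by the magnitude of its score difference (here via a sigmoid); it is not one of the paper's actual metrics.

```python
import numpy as np

def auc_pairwise(scores_pos, scores_neg):
    """Standard AUC as the Wilcoxon-Mann-Whitney statistic: the fraction of
    positive-negative pairs whose score difference is positive, ties = 1/2."""
    diffs = np.subtract.outer(scores_pos, scores_neg)  # all pairwise differences
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size

def soft_auc(scores_pos, scores_neg):
    """Hypothetical 'soft' variant (an assumption, not the paper's metric):
    replace the 0/1 step on the sign of each score difference with a sigmoid
    of its magnitude, so well-separated pairs count more than marginal ones."""
    diffs = np.subtract.outer(scores_pos, scores_neg)
    return np.mean(1.0 / (1.0 + np.exp(-diffs)))

# Toy sample: scores in [0, 1] from a binary scoring classifier.
pos = np.array([0.9, 0.8, 0.55])
neg = np.array([0.6, 0.3, 0.2])
print(auc_pairwise(pos, neg))  # 8/9 ~ 0.889: one of nine pairs mis-ordered
print(soft_auc(pos, neg))      # lower and smoother: margins now matter
```

On this toy sample, the hard AUC charges the single inversion (0.55 vs. 0.6) as a full error, whereas the soft version penalizes it only mildly because the margin is small; this is exactly the intuition behind the variants that the paper then argues is ineffective for model evaluation and selection.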