Estimating the uncertainty in the estimated mean area under the ROC curve of a classifier
Pattern Recognition Letters
The most common metric for assessing a classifier's performance is the classification error rate, i.e., the probability of misclassification (PMC). Receiver Operating Characteristic (ROC) analysis is a more general way to measure performance. Common metrics that summarize the ROC curve are the two normal-deviate-axes parameters, a and b, and the Area Under the Curve (AUC). The parameters a and b are the intercept and slope, respectively, of the ROC curve when plotted on normal-deviate axes. The AUC is the average of the classifier's true-positive fraction (TPF) over the false-positive fraction (FPF) obtained by sweeping the decision threshold. In the present work, we used Monte-Carlo simulations to compare different bootstrap-based estimators of the AUC, e.g., the leave-one-out, .632, and .632+ bootstraps. The results show comparable performance of the different estimators in terms of root-mean-square (RMS) error, while the .632+ bootstrap is the least biased.
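To make the quantities in the abstract concrete, the following is a minimal sketch (not taken from the paper) of the empirical AUC via the Mann-Whitney statistic and of a leave-one-out bootstrap AUC estimator, one of the estimators compared above. The `train_and_score` classifier interface is a hypothetical placeholder for illustration.

```python
import random

def auc(scores_pos, scores_neg):
    """Empirical AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) score pairs ranked correctly (ties count 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def loo_bootstrap_auc(samples, labels, train_and_score, B=100, seed=0):
    """Leave-one-out bootstrap: each case is scored only by classifiers
    trained on bootstrap replicates that do NOT contain that case.
    `train_and_score` (hypothetical interface) takes a training set of
    (sample, label) pairs and returns a scoring function."""
    rng = random.Random(seed)
    n = len(samples)
    per_case = [[] for _ in range(n)]  # out-of-bag scores for each case
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap replicate
        in_bag = set(idx)
        score = train_and_score([(samples[i], labels[i]) for i in idx])
        for i in range(n):
            if i not in in_bag:
                per_case[i].append(score(samples[i]))
    # average each case's out-of-bag scores, then compute the AUC
    pos = [sum(s) / len(s) for i, s in enumerate(per_case) if labels[i] == 1 and s]
    neg = [sum(s) / len(s) for i, s in enumerate(per_case) if labels[i] == 0 and s]
    return auc(pos, neg)
```

The .632 and .632+ estimators (not shown) combine this out-of-bag estimate with the resubstitution (training-set) AUC using fixed or adaptive weights, respectively.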