Comparison of Non-Parametric Methods for Assessing Classifier Performance in Terms of ROC Parameters

  • Authors:
  • Waleed A. Yousef; Robert F. Wagner; Murray H. Loew

  • Affiliations:
  • George Washington University; Center for Devices & Radiological Health, FDA; George Washington University

  • Venue:
  • AIPR '04 Proceedings of the 33rd Applied Imagery Pattern Recognition Workshop
  • Year:
  • 2004

Abstract

The most common metric for assessing a classifier's performance is the classification error rate, or probability of misclassification (PMC). Receiver Operating Characteristic (ROC) analysis is a more general way to measure performance. Among the metrics that summarize the ROC curve are the two normal-deviate-axes parameters, a and b, and the Area Under the Curve (AUC). The parameters a and b are, respectively, the intercept and slope of the ROC curve when it is plotted on normal-deviate axes. The AUC is the classifier's true-positive fraction (TPF) averaged over the false-positive fraction (FPF) as the decision threshold varies. In the present work, we used Monte-Carlo simulations to compare different bootstrap-based estimators of the AUC, e.g., the leave-one-out, .632, and .632+ bootstraps. The results show comparable performance of the different estimators in terms of root-mean-square (RMS) error, while the .632+ bootstrap is the least biased.
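The nonparametric (Mann-Whitney) AUC and its .632-style bootstrap estimate described above can be sketched as follows. This is a minimal illustration, not the paper's actual simulation setup: the toy linear scorer, the sample sizes, and all function names are assumptions made for the example.

```python
import random

def empirical_auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of AUC: P(score_pos > score_neg), ties count 1/2."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def mean_vec(rows):
    """Componentwise mean of a list of feature vectors."""
    d = len(rows[0])
    return [sum(r[i] for r in rows) / len(rows) for i in range(d)]

def train_scorer(data):
    """Toy classifier (an assumption for this sketch): score along the
    difference of the two class means."""
    mu1 = mean_vec([x for x, y in data if y == 1])
    mu0 = mean_vec([x for x, y in data if y == 0])
    w = [a - b for a, b in zip(mu1, mu0)]
    return lambda x: sum(wi * xi for wi, xi in zip(w, x))

def auc_632(data, n_boot=200, seed=0):
    """.632 bootstrap estimate of AUC: a weighted combination of the
    apparent AUC and the leave-one-out bootstrap AUC."""
    rng = random.Random(seed)
    n = len(data)
    # Apparent AUC: train and test on the full data set (optimistic).
    s = train_scorer(data)
    app = empirical_auc([s(x) for x, y in data if y == 1],
                        [s(x) for x, y in data if y == 0])
    # Leave-one-out bootstrap: train on a bootstrap sample, score only
    # the out-of-bag cases (pessimistic).
    oob_aucs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bag = set(idx)
        boot = [data[i] for i in idx]
        oob = [data[i] for i in range(n) if i not in bag]
        pos = [x for x, y in oob if y == 1]
        neg = [x for x, y in oob if y == 0]
        # Skip degenerate replicates lacking a class in-bag or out-of-bag.
        if not pos or not neg:
            continue
        if not any(y == 1 for _, y in boot) or not any(y == 0 for _, y in boot):
            continue
        s = train_scorer(boot)
        oob_aucs.append(empirical_auc([s(x) for x in pos], [s(x) for x in neg]))
    loob = sum(oob_aucs) / len(oob_aucs)
    return 0.368 * app + 0.632 * loob
```

The .632+ variant studied in the paper additionally adapts the 0.632 weight to the degree of overfitting; the fixed-weight form above is only the simpler .632 estimator.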