A comparison study of cost-sensitive classifier evaluations

  • Authors:
  • Bing Zhou; Qingzhong Liu

  • Affiliations:
  • Department of Computer Science, Sam Houston State University, Huntsville, Texas (both authors)

  • Venue:
  • BI'12: Proceedings of the 2012 International Conference on Brain Informatics
  • Year:
  • 2012

Abstract

Performance evaluation plays an important role in the rule induction and classification process. Classic evaluation measures have been extensively studied in the past. In recent years, cost-sensitive classification has received much attention. In a typical classification task, all types of classification errors are treated equally; in many practical cases, however, not all errors are equal. It is therefore critical to build a cost-sensitive classifier that minimizes the expected cost. This raises another important issue, namely, the evaluation of cost-sensitive classifiers. The main objective of this paper is to investigate different aspects of this problem. We review five existing cost-sensitive evaluation measures and compare their similarities and differences. We find that in most cases the cost-sensitive measures produce evaluation results consistent with those of classic evaluation measures. However, when different cost values are applied to the evaluation, the performance differences between the algorithms change. It is reasonable to conclude that the evaluation results can change dramatically when certain cost values are applied. Moreover, by using cost curves to visualize the classification results, the performance of different classifiers, and the differences between them, can be easily seen.
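
The abstract refers to minimizing expected cost under unequal error costs. As an illustrative sketch only (not taken from the paper), the snippet below computes the expected cost per example for a binary classifier from a confusion matrix and a user-supplied cost matrix; the specific cost values and counts are hypothetical assumptions.

```python
import numpy as np

# Hypothetical example: expected cost of a binary classifier.
# Rows = actual class, columns = predicted class (0 = negative, 1 = positive).
# The cost and count values are illustrative assumptions, not from the paper.
cost_matrix = np.array([
    [0.0, 1.0],   # cost of true negative, cost of false positive
    [5.0, 0.0],   # cost of false negative (5x a false positive), cost of true positive
])

confusion = np.array([
    [850, 50],    # true negatives, false positives
    [20, 80],     # false negatives, true positives
])

# Expected cost per example: sum of (count * cost) over all cells,
# divided by the total number of classified examples.
expected_cost = (confusion * cost_matrix).sum() / confusion.sum()
print(f"Expected cost per example: {expected_cost:.4f}")
```

Under these assumed numbers the expected cost is (50*1 + 20*5) / 1000 = 0.15; changing the cost matrix changes the ranking of classifiers, which is the sensitivity to cost values that the abstract describes.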