A novel measure for evaluating classifiers

  • Authors:
  • Jin-Mao Wei; Xiao-Jie Yuan; Qing-Hua Hu; Shu-Qin Wang

  • Affiliations:
  • Jin-Mao Wei and Xiao-Jie Yuan: Department of Computer Science, Nankai University, Tianjin 300071, China; Qing-Hua Hu: Harbin Institute of Technology, Harbin, China; Shu-Qin Wang: College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2010

Abstract

Evaluating classifier performance is a crucial problem in pattern recognition and machine learning. In this paper, we propose a new measure, termed confusion entropy, for evaluating classifiers. For each class cl_i of an (N+1)-class problem, the misclassification information involves both how the samples with true class label cl_i have been misclassified into the other N classes and how the samples of the other N classes have been misclassified into class cl_i. The proposed measure exploits the class distribution information of such misclassifications over all classes. Both theoretical analysis and statistical experiments show that the proposed measure discriminates classifier performance more finely than accuracy and RCI (relative classifier information). Experimental results on several benchmark data sets further confirm the theoretical analysis and statistical results, and show that the new measure is feasible for evaluating classifier performance.
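The abstract does not reproduce the formal definition, but the sketch below illustrates how a confusion-entropy score of this kind can be computed from a confusion matrix. It assumes the definition commonly reported for this measure in the follow-up literature: for each class j, misclassification probabilities are normalized by all samples entering or leaving class j, the per-class entropy uses logarithm base 2(N-1), and the overall score is a class-weighted sum. The function name confusion_entropy and the example matrix are illustrative, not taken from the paper.

```python
# A minimal sketch of confusion entropy (CEN), assuming the definition
# attributed to Wei et al. (2010) in later comparison studies.
# Lower CEN indicates less confusion; a perfect classifier scores 0.

import numpy as np

def confusion_entropy(C):
    """Overall CEN for an N x N confusion matrix C, where C[i, j]
    counts samples of true class i predicted as class j (N >= 2)."""
    C = np.asarray(C, dtype=float)
    N = C.shape[0]
    if N < 2:
        raise ValueError("CEN requires at least two classes")
    total = C.sum()
    cen = 0.0
    for j in range(N):
        # All samples entering or leaving class j (diagonal counted twice).
        mass_j = C[j, :].sum() + C[:, j].sum()
        if mass_j == 0:
            continue
        # Class weight P_j; the weights sum to 1 over all classes.
        p_j = mass_j / (2.0 * total)
        cen_j = 0.0
        for k in range(N):
            if k == j:
                continue
            # Probabilities of confusing j with k, in both directions,
            # normalized by the mass associated with class j.
            for p in (C[j, k] / mass_j, C[k, j] / mass_j):
                if p > 0:
                    # Entropy term with logarithm base 2(N-1).
                    cen_j -= p * np.log(p) / np.log(2 * (N - 1))
        cen += p_j * cen_j
    return cen

# Usage: a diagonal matrix gives CEN = 0; heavier off-diagonal
# confusion pushes the score toward 1.
C = [[90,  5,  5],
     [10, 80, 10],
     [ 0, 10, 90]]
print(round(confusion_entropy(C), 4))
```

Note how the score uses both row and column misclassifications for each class, matching the abstract's point that class cl_i's quality depends on samples misclassified out of cl_i as well as samples of other classes misclassified into cl_i.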