Performance evaluation plays an important role in rule induction and classification. Classic evaluation measures have been studied extensively, and in recent years cost-sensitive classification has received much attention. In a typical classification task, all types of classification errors are treated equally, but in many practical settings some errors are more costly than others. It is therefore critical to build cost-sensitive classifiers that minimize the expected cost, which raises another important issue: cost-sensitive classifier evaluation. The main objective of this work is to investigate different aspects of this problem. We review five existing cost-sensitive evaluation measures and compare their similarities and differences. We find that in most cases the cost-sensitive measures produce evaluation results consistent with the classic measures. However, when different cost values are applied, the performance differences between the algorithms change, and evaluation results can change dramatically under certain cost settings. Moreover, by using cost curves to visualize the classification results, the performance of different classifiers, and the differences between them, can be easily seen.
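The idea of evaluating a classifier by expected cost rather than plain error rate can be sketched as follows. This is a minimal illustration, not the paper's own procedure: the confusion matrix and the two cost matrices below are hypothetical values chosen to show how the same predictions score differently once unequal error costs are applied.

```python
def expected_cost(confusion, costs):
    """Average cost per instance: sum of count * cost over all cells,
    divided by the total number of instances."""
    total = sum(sum(row) for row in confusion)
    weighted = sum(confusion[i][j] * costs[i][j]
                   for i in range(len(confusion))
                   for j in range(len(confusion[i])))
    return weighted / total

# Hypothetical binary confusion matrix: rows = actual class,
# columns = predicted class (order: negative, positive).
confusion = [[90, 10],   # 10 false positives
             [5, 95]]    # 5 false negatives

uniform = [[0, 1], [1, 0]]    # all errors cost 1: reduces to classic error rate
skewed = [[0, 1], [10, 0]]    # false negatives cost 10x more than false positives

print(expected_cost(confusion, uniform))  # 0.075 (the classic error rate)
print(expected_cost(confusion, skewed))   # 0.3 (same classifier, much worse under these costs)
```

With uniform costs the measure collapses to the classic error rate; once the cost of one error type is scaled up, the ranking of classifiers can change, which is the effect the abstract describes.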