C4.5: Programs for Machine Learning.
Robust Classification Systems for Imprecise Environments. AAAI '98/IAAI '98: Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence.
Explicitly Representing Expected Cost: An Alternative to ROC Representation. Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Information Retrieval
The Case against Accuracy Estimation for Comparing Induction Algorithms. ICML '98: Proceedings of the Fifteenth International Conference on Machine Learning.
Bootstrap Methods for the Cost-Sensitive Evaluation of Classifiers. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
Exploiting the Cost (In)sensitivity of Decision Tree Splitting Criteria. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
Pattern Classification (2nd Edition).
Estimating the Utility Value of Individual Credit Card Delinquents. Expert Systems with Applications: An International Journal.
Decision Support Systems
Towards the Generic Framework for Utility Considerations in Data Mining Research. Proceedings of the 2010 Conference on Data Mining for Business Applications.
A Comparison Study of Cost-Sensitive Classifier Evaluations. BI '12: Proceedings of the 2012 International Conference on Brain Informatics.
Influence of Class Distribution on Cost-Sensitive Learning: A Case Study of Bankruptcy Analysis. Intelligent Data Analysis.
Evaluating classifier performance in a cost-sensitive setting is straightforward if the operating conditions (misclassification costs and class distributions) are fixed and known. When this is not the case, evaluation requires a method of visualizing classifier performance across the full range of possible operating conditions. This paper reviews the classic technique for classifier performance visualization -- the ROC curve -- and argues that it is inadequate for the needs of researchers and practitioners in several important respects. It then shows that a different way of visualizing classifier performance -- the cost curve introduced by Drummond and Holte at KDD'2000 -- overcomes these deficiencies. A software package supporting all the cost curve analysis described in this paper is available by contacting the first author.
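As a rough illustration of the cost-curve idea the abstract describes: each classifier's ROC point (FPR, TPR) maps to a line giving its normalized expected cost as a function of the probability-cost value on [0, 1], and the cost curve is the lower envelope of those lines. The sketch below follows the Drummond and Holte formulation, but the function names and sampling scheme are mine, not taken from the paper's software package.

```python
def expected_cost(fpr, tpr, pc):
    """Normalized expected cost of a classifier with ROC point
    (fpr, tpr), evaluated at probability-cost value pc in [0, 1]:
    NEC(pc) = (1 - tpr) * pc + fpr * (1 - pc)."""
    return (1.0 - tpr) * pc + fpr * (1.0 - pc)

def cost_curve(roc_points, n=101):
    """Lower envelope of the cost lines of the given classifiers,
    sampled at n evenly spaced probability-cost values.

    The two trivial classifiers -- always-negative (FPR=0, TPR=0)
    and always-positive (FPR=1, TPR=1) -- are included, so the
    envelope never exceeds min(pc, 1 - pc)."""
    pts = list(roc_points) + [(0.0, 0.0), (1.0, 1.0)]  # add trivial classifiers
    pcs = [i / (n - 1) for i in range(n)]
    return [(pc, min(expected_cost(f, t, pc) for f, t in pts)) for pc in pcs]
```

For example, a classifier with FPR = 0.2 and TPR = 0.8 has expected cost 0.2 at pc = 0.5; near pc = 0 or pc = 1 the trivial classifiers take over, which is why the envelope always starts and ends at zero.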