Explicitly representing expected cost: an alternative to ROC representation
Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Evaluating classifier performance in a cost-sensitive setting is straightforward when the operating conditions (misclassification costs and class distributions) are fixed and known. When they are not, evaluation requires a method of visualizing classifier performance across the full range of possible operating conditions. This talk outlines the most important requirements for cost-sensitive classifier evaluation for machine learning and KDD researchers and practitioners, and introduces a recently developed visualization technique, the cost curve, that meets all of these requirements.
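To make the construction concrete, below is a minimal sketch, not code from the talk, of how a cost curve is built from a classifier's ROC points following Drummond & Holte's formulation: each ROC point (FPR, TPR) becomes a straight line in cost space, and the lower envelope over all points is the cost curve. The ROC points and plotting choices are invented for illustration.

```python
# Minimal cost-curve sketch (assumed example data, not the authors' code).
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical ROC points (FPR, TPR) for one classifier's thresholds.
# (0, 0) and (1, 1) are the trivial "always negative" / "always positive"
# classifiers, whose cost lines are NEC = PC(+) and NEC = 1 - PC(+).
roc_points = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.85), (0.6, 0.95), (1.0, 1.0)]

# x-axis: the probability-cost function, which folds the class prior and
# the two misclassification costs into a single operating condition:
#   PC(+) = p(+) * C(-|+) / (p(+) * C(-|+) + p(-) * C(+|-))
pc = np.linspace(0.0, 1.0, 101)

# Each ROC point maps to the line NEC = FNR * PC(+) + FPR * (1 - PC(+)),
# i.e. a segment from (0, FPR) to (1, FNR) in cost space.
lines = [(1.0 - tpr) * pc + fpr * (1.0 - pc) for fpr, tpr in roc_points]
for line in lines:
    plt.plot(pc, line, color="lightgray")

# The lower envelope is the classifier's cost curve: its normalized
# expected cost when the best threshold is chosen for each condition.
plt.plot(pc, np.min(lines, axis=0), color="black", label="cost curve")

plt.xlabel("probability cost PC(+)")
plt.ylabel("normalized expected cost")
plt.legend()
plt.show()
```

Reading the plot is direct in a way an ROC graph is not: for any assumed operating condition on the x-axis, the height of the envelope is the expected cost of the classifier at its best threshold for that condition.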