Explicitly representing expected cost: an alternative to ROC representation
Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining
Performance evaluation of classifiers is a crucial step in selecting the best classifier, or the best set of parameters for a classifier. The misclassification rate of a classifier is often too simplistic a measure, because it does not take into account that misclassifications of different classes may have more or less serious consequences. On the other hand, it is often difficult to specify the consequences or costs of misclassifications exactly. ROC and AUC analysis try to overcome these problems, but have their own disadvantages and even inconsistencies. We propose a visualisation technique for classifier performance evaluation and comparison that avoids the problems of ROC and AUC analysis.