We address the problem of applying machine-learning classifiers in domains where incorrect classifications have severe consequences. In such domains we propose to apply a classifier only when its performance meets criteria defined by the domain expert prior to classification; classifiers satisfying this requirement are called reliable classifiers. The article presents three main contributions. First, we establish the effect on an ROC curve of leaving ambiguous instances unclassified. Second, we propose the ROC isometrics approach to tune and transform a classifier in such a way that it becomes reliable. Third, we provide an empirical evaluation of the approach. From our analysis and experimental evaluation we conclude that the ROC isometrics approach is an effective and efficient way to construct reliable classifiers. In addition, a discussion of related work shows the benefits of the approach compared with existing approaches that also have the option to leave ambiguous instances unclassified.
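The core mechanism described above, leaving ambiguous instances unclassified, can be sketched as a simple abstaining classifier. This is a minimal illustration, not the paper's ROC isometrics method: the function name and the two thresholds are hypothetical, and in the paper's setting the thresholds would be tuned via ROC analysis so that the classified instances meet the expert-set performance level.

```python
def abstaining_classifier(scores, t_low, t_high):
    """Classify by score; abstain on ambiguous instances.

    Hypothetical sketch: `scores` are estimated P(positive) from any
    probabilistic classifier; `t_low` / `t_high` are decision thresholds
    (assumed tuned, e.g. via ROC analysis, so that classified instances
    satisfy the domain expert's performance criteria).
    Returns 1 (positive), 0 (negative), or None (left unclassified).
    """
    labels = []
    for s in scores:
        if s >= t_high:
            labels.append(1)       # confidently positive
        elif s <= t_low:
            labels.append(0)       # confidently negative
        else:
            labels.append(None)    # ambiguous: abstain
    return labels

# Example: the instance scored 0.55 falls between the thresholds
# and is left unclassified.
preds = abstaining_classifier([0.95, 0.10, 0.55, 0.80], t_low=0.25, t_high=0.75)
# preds == [1, 0, None, 1]
```

Widening the abstention band (raising `t_high`, lowering `t_low`) trades coverage for accuracy on the instances that are classified, which is exactly the trade-off the ROC-based analysis makes explicit.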