Predictions made by imprecise-probability models are often indeterminate, that is, set-valued. Summarizing the quality of an indeterminate prediction in a single number is important for comparing different models fairly, but a principled approach to this problem has been missing. In this paper we derive, from a set of assumptions, a metric for evaluating the predictions of credal classifiers, i.e., supervised learning models that issue set-valued predictions. The metric turns out to combine an objective component with a subjective one that reflects the decision maker's degree of risk aversion toward the variability of predictions. We discuss when the measure can be made independent of that degree, and we show how comparisons of classifiers under the new measure change with the number of predictions to be made. Finally, we run extensive empirical tests of credal, as well as precise, classifiers using the new metric. The experiments demonstrate the practical usefulness of the metric while providing a first insightful and extensive comparison of credal classifiers.
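To make the idea concrete, the following sketch shows one way a metric of this kind can be instantiated: an objective component (discounted accuracy, which rewards a set-valued prediction by 1/|S| when it contains the true class) composed with a concave utility that models risk aversion to the variability of predictions. The abstract does not fix these formulas; the quadratic utilities below (often called u65 and u80 in the credal-classification literature) are an assumed illustration, not the paper's definitive metric.

```python
# Hedged illustration of a utility-based score for set-valued predictions.
# Objective component: discounted accuracy = 1/|S| if the true class is in
# the predicted set S, else 0. Subjective component: a concave utility of
# that value, encoding risk aversion to indeterminate (set-valued) output.
# u65 and u80 are assumed example utilities; both map 1 -> 1 and 0 -> 0,
# and score a correct binary indeterminate prediction 0.65 and 0.80.

def discounted_accuracy(pred_set, true_label):
    """Objective reward for one set-valued prediction."""
    return 1.0 / len(pred_set) if true_label in pred_set else 0.0

def u65(x):
    """Quadratic utility with u65(0.5) = 0.65 (mildly risk-averse)."""
    return -0.6 * x * x + 1.6 * x

def u80(x):
    """Quadratic utility with u80(0.5) = 0.80 (more tolerant of sets)."""
    return -1.2 * x * x + 2.2 * x

def score(predictions, labels, utility=u65):
    """Average utility-discounted accuracy over a test set."""
    xs = [discounted_accuracy(s, y) for s, y in zip(predictions, labels)]
    return sum(utility(x) for x in xs) / len(xs)

# Toy test set: one determinate hit, one indeterminate hit, one miss.
preds = [{"a"}, {"a", "b"}, {"b"}]
truth = ["a", "a", "c"]
```

Under this sketch a determinate correct prediction always scores 1, a wrong prediction 0, and an indeterminate prediction containing the truth lands in between; the choice of utility is exactly where the decision maker's risk aversion enters, which is the degree the paper discusses rendering the measure independent of.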