In classification problems, isotonic regression is commonly used to map prediction scores to posterior class probabilities. However, isotonic regression can overfit, and the learned mapping is often discontinuous. Moreover, existing work focuses mainly on calibrating a single classifier, even though different classifiers have different strengths and combining them can yield better performance. In this paper, we propose a novel probability calibration approach for such an ensemble of classifiers. We first construct isotonic constraints on the desired probabilities based on soft voting of the classifiers. Manifold information is also incorporated to combat overfitting and to ensure smoothness of the learned mapping. Computationally, the extended isotonic regression model can be learned efficiently by a novel optimization algorithm based on the alternating direction method of multipliers (ADMM). Experiments on a number of real-world data sets demonstrate that the proposed approach consistently outperforms the independent classifiers and other combinations of the classifiers' probabilities in terms of both the Brier score and the AUC.
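For context, the sketch below shows the standard single-classifier isotonic-calibration baseline that the abstract contrasts with, using scikit-learn's `IsotonicRegression` and the two evaluation metrics mentioned (Brier score and AUC). This is a minimal illustration under assumed names and a synthetic data set, not the paper's extended ensemble model with manifold regularization and ADMM.

```python
# Minimal baseline sketch: calibrate one classifier's scores with isotonic
# regression on a held-out set, then evaluate with Brier score and AUC.
# All data, model, and variable choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification data, split into train / calibration / test.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Raw (uncalibrated) scores on the calibration and test sets.
s_cal = clf.predict_proba(X_cal)[:, 1]
s_test = clf.predict_proba(X_test)[:, 1]

# Fit a monotone, piecewise-constant mapping from scores to probabilities.
iso = IsotonicRegression(out_of_bounds="clip").fit(s_cal, y_cal)
p_test = iso.predict(s_test)

print("Brier (raw):      %.4f" % brier_score_loss(y_test, s_test))
print("Brier (isotonic): %.4f" % brier_score_loss(y_test, p_test))
print("AUC   (isotonic): %.4f" % roc_auc_score(y_test, p_test))
```

Note that the mapping learned here is piecewise constant, hence discontinuous, and can overfit a small calibration set; these are precisely the weaknesses of plain isotonic regression that the abstract's manifold-regularized, ensemble-based extension is designed to address.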