Calibrated lazy associative classification
SBBD '08 Proceedings of the 23rd Brazilian symposium on Databases
Classification is a popular machine learning task. Given an example x and a class c, a classifier typically works by estimating the probability of x being a member of c (i.e., the membership probability). Well-calibrated classifiers are those able to provide accurate estimates of class membership probabilities: the estimated probability p̂(c|x) is close to p(c|p̂(c|x)), the true (unknown) empirical probability that x is a member of c given that the probability estimated by the classifier is p̂(c|x). Calibration is not a necessary property for producing accurate classifiers, and thus most research has focused on direct accuracy maximization strategies rather than on calibration. However, non-calibrated classifiers are problematic in applications where the reliability associated with a prediction must be taken into account. In such applications, a sensible use of the classifier must be based on the reliability of its predictions, and thus the classifier must be well calibrated. In this paper we show that lazy associative classifiers (LAC) are well calibrated when using an MDL-based entropy minimization method. We investigate important applications where both characteristics (i.e., accuracy and calibration) are relevant, and we demonstrate empirically that LAC outperforms other classifiers, such as SVMs, Naive Bayes, and Decision Trees (even after these classifiers are calibrated). Additional highlights of LAC include the ability to incorporate reliable predictions to improve training, and the ability to refrain from doubtful predictions.
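The calibration property defined above can be checked empirically by binning a classifier's predicted probabilities and comparing each bin's mean prediction with the observed class frequency in that bin. The sketch below is illustrative only: it is not the paper's MDL-based entropy minimization method, and the function names and bin scheme are our own assumptions.

```python
# Illustrative sketch (not the paper's method): measure how well calibrated
# a binary classifier is by binning its predicted probabilities p̂(c|x) and
# comparing each bin's mean prediction with the empirical frequency of c.

def _bin_predictions(probs, labels, n_bins):
    """Assign (predicted probability, true label in {0,1}) pairs to
    equal-width bins over [0, 1]."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    return bins

def reliability_table(probs, labels, n_bins=5):
    """Per non-empty bin: (mean predicted probability, empirical frequency).
    For a well-calibrated classifier the two values are close in every bin."""
    table = []
    for b in _bin_predictions(probs, labels, n_bins):
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            freq = sum(y for _, y in b) / len(b)
            table.append((mean_p, freq))
    return table

def expected_calibration_error(probs, labels, n_bins=5):
    """Weighted average gap |mean prediction - empirical frequency| across
    bins; 0.0 indicates perfect calibration at this bin resolution."""
    n = len(probs)
    ece = 0.0
    for b in _bin_predictions(probs, labels, n_bins):
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            freq = sum(y for _, y in b) / len(b)
            ece += (len(b) / n) * abs(mean_p - freq)
    return ece
```

With a measure like this in hand, the "refrain from doubtful predictions" behavior the abstract mentions reduces to abstaining whenever p̂(c|x) falls below a chosen reliability threshold.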