AUC: a better measure than accuracy in comparing learning algorithms
AI'03 Proceedings of the 16th Canadian Society for Computational Studies of Intelligence Conference on Advances in Artificial Intelligence
In most data mining applications, accurate ranking and probability estimation are essential. Many traditional classifiers, however, are trained only to achieve high classification accuracy (equivalently, a low error rate), even though they also produce probability estimates. Does high predictive accuracy imply better ranking and probability estimation? Is there a better way than classification accuracy to evaluate such classifiers for data mining applications? The answer is the area under the ROC (Receiver Operating Characteristic) curve, or simply AUC. We show that AUC provides a more discriminating evaluation of ranking and probability estimation than accuracy does. Further, we show that classifiers constructed to maximise AUC produce not only higher AUC values but also higher classification accuracies. Our results are based on an experimental comparison between error-based and AUC-based learning algorithms for TAN (Tree-Augmented Naive Bayes).
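The abstract's central claim, that AUC discriminates between classifiers where accuracy cannot, can be illustrated with a small sketch. The code below is not from the paper; the function names and toy data are our own. It computes AUC via the Wilcoxon-Mann-Whitney statistic (the fraction of (negative, positive) example pairs that the scores rank correctly) and shows two classifiers with identical accuracy at the 0.5 threshold but different AUC, because one of them misranks a pair of examples.

```python
def auc(labels, scores):
    """Wilcoxon-Mann-Whitney estimate of the area under the ROC curve:
    the fraction of (negative, positive) pairs ranked correctly by the
    scores, counting ties as half-correct."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    correct = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return correct / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Classification accuracy after thresholding the scores."""
    return sum((s > threshold) == y for y, s in zip(labels, scores)) / len(labels)

y = [0, 0, 1, 1]
a = [0.2, 0.6, 0.7, 0.8]   # ranks both positives above both negatives
b = [0.2, 0.6, 0.55, 0.8]  # misranks one (negative, positive) pair

# Both classifiers misclassify the same single example at threshold 0.5,
# so accuracy(y, a) == accuracy(y, b) == 0.75, yet
# auc(y, a) == 1.0 while auc(y, b) == 0.75.
```

Accuracy collapses each score to a binary decision, so it cannot see that classifier `a` orders the examples perfectly while `b` does not; AUC, being a ranking measure, separates the two.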