A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM Symposium on the Theory of Computing (STOC'94), May 23–25, 1994, and second annual European Conference on Computational Learning Theory (EuroCOLT'95), March 13–15, 1995
Tree Induction for Probability-Based Ranking
Machine Learning
AUC: a statistically consistent and more discriminating measure than accuracy
IJCAI'03 Proceedings of the 18th international joint conference on Artificial intelligence
A study of cross-validation and bootstrap for accuracy estimation and model selection
IJCAI'95 Proceedings of the 14th international joint conference on Artificial intelligence - Volume 2
Boosting is a general method for combining a set of classifiers to make a final prediction. It has been shown to be an effective way to improve the predictive accuracy of a learning algorithm, but its impact on ranking performance is unknown. This paper introduces AUCBoost, a generic boosting algorithm for improving the ranking performance of learning algorithms. Unlike AdaBoost, AUCBoost uses the AUC, rather than the accuracy, of a classifier to calculate the weight of each training example when building the next classifier. To simplify the computation of the AUC over weighted instances in AUCBoost, we extend the standard AUC formula to a weighted AUC formula (WAUC for short). This extension frees boosting from the resampling process and saves substantial computation time during training. Our experimental results show that AUCBoost improves on the ranking performance of AdaBoost when the base learning algorithm is the ranking-favored decision tree C4.4 or naïve Bayes.
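To make the WAUC idea concrete, the following is a minimal sketch of one natural weighted generalization of the pairwise AUC formula: each positive-negative instance pair is weighted by the product of the two instance weights, with ties counted as half. This is an illustrative reading only; the function name `weighted_auc` and the exact pair-weighting scheme are assumptions, and the paper's precise WAUC definition may differ.

```python
def weighted_auc(scores, labels, weights):
    """Weighted AUC over all positive-negative instance pairs.

    Each pair contributes weight w_pos * w_neg; a correctly ordered
    pair (positive scored above negative) counts fully, a tie counts
    half. With uniform weights this reduces to the standard AUC.
    Illustrative sketch only -- not necessarily the paper's WAUC.
    """
    pos = [(s, w) for s, y, w in zip(scores, labels, weights) if y == 1]
    neg = [(s, w) for s, y, w in zip(scores, labels, weights) if y == 0]
    num = 0.0  # weight of correctly ordered (and half of tied) pairs
    den = 0.0  # total pair weight
    for sp, wp in pos:
        for sn, wn in neg:
            pair_w = wp * wn
            den += pair_w
            if sp > sn:
                num += pair_w
            elif sp == sn:
                num += 0.5 * pair_w
    return num / den if den > 0 else 0.0
```

With all weights equal to 1 this reproduces the ordinary AUC, which is the property that lets a boosting round reweight examples without resampling.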