We consider the extension of standard decision tree methods to the bipartite ranking problem. In ranking, the goal pursued is global: define an order on the whole input space so that positive instances appear on top with maximum probability. The most natural way of ordering all instances consists in projecting the input data x onto the real line using a real-valued scoring function s, and the accuracy of the ordering induced by a candidate s is classically measured in terms of the AUC. In this paper, we discuss the design of tree-structured scoring functions obtained by maximizing the AUC criterion. In particular, the connection with recursive piecewise linear approximation of the optimal ROC curve, both in the L1-sense and in the L∞-sense, is discussed.
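To make the AUC criterion concrete: the empirical AUC of a scoring function s equals the fraction of positive/negative pairs that s ranks correctly (a randomly drawn positive instance scoring above a randomly drawn negative one, with ties counted as one half). The following minimal sketch, with illustrative names not taken from the paper, computes this pairwise estimate directly:

```python
def empirical_auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of (positive, negative) pairs ranked
    correctly by the scoring function, counting ties as 1/2."""
    wins = 0.0
    for sp in pos_scores:          # score of a positive instance
        for sn in neg_scores:      # score of a negative instance
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores assigned by some candidate s: 8 of the 9
# positive/negative pairs are ordered correctly, so AUC = 8/9.
print(empirical_auc([0.9, 0.8, 0.4], [0.7, 0.2, 0.1]))
```

This O(n_+ n_-) double loop is only meant to expose the definition; in practice the same quantity is obtained in O(n log n) by sorting all scores and accumulating ranks.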