This paper investigates how recursive partitioning methods can be adapted to the bipartite ranking problem. In ranking, the goal is global: based on past data, define an order on the whole input space X so that positive instances occupy the top ranks with maximum probability. The most natural way to order all instances is to project the input data onto the real line through a real-valued scoring function s and use the natural order on R. The accuracy of the ordering induced by a candidate s is classically measured in terms of the ROC curve or the AUC. Here we discuss the design of tree-structured scoring functions obtained by recursively maximizing the AUC criterion. The connection with recursive piecewise linear approximation of the optimal ROC curve, both in the L1 sense and in the L∞ sense, is highlighted. A novel tree-based ranking algorithm, called TREERANK, is proposed. Consistency results and generalization bounds of a functional nature are established for this ranking method under either the L1 or the L∞ distance. We also describe committee-based learning procedures that use TREERANK as a "base ranker," in order to overcome the obvious drawbacks of such a top-down partitioning technique. Simulation results on artificial data are also presented.
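To make the idea concrete, here is a minimal sketch of a TreeRank-style procedure, not the authors' implementation: at each node an axis-aligned threshold split is chosen to maximize a local AUC surrogate (the TPR − FPR gap between the two cells), the higher-scoring cell is recursed into the upper half of the score range, and leaves are scored by their position in the left-to-right leaf order. The exhaustive threshold search and the particular score assignment are simplifying assumptions for illustration.

```python
import numpy as np

def best_split(X, y):
    """Exhaustive axis-aligned split maximizing |TPR - FPR| within the node.

    Ranking one cell above the other yields a two-cell AUC that is an
    affine function of TPR - FPR, so maximizing this gap maximizes the
    local AUC of the split. Returns (feature, threshold, low_side_is_top).
    """
    n_pos = max(int(y.sum()), 1)
    n_neg = max(int((1 - y).sum()), 1)
    best_gain, best = -np.inf, None
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])
        ys = y[order]
        tpr = np.cumsum(ys) / n_pos          # positives at or below threshold
        fpr = np.cumsum(1 - ys) / n_neg      # negatives at or below threshold
        gain = np.maximum(tpr - fpr, fpr - tpr)  # either cell may rank on top
        i = int(np.argmax(gain))
        if gain[i] > best_gain:
            best_gain = gain[i]
            best = (j, X[order[i], j], bool(tpr[i] >= fpr[i]))
    return best

def tree_rank(X, y, depth, lo=0.0, hi=1.0):
    """Recursively build a piecewise-constant scoring function s: X -> [lo, hi]."""
    if depth == 0 or len(np.unique(y)) < 2:
        mid = 0.5 * (lo + hi)
        return lambda Z: np.full(len(Z), mid)  # one leaf, one constant score
    j, thr, low_is_top = best_split(X, y)
    low = X[:, j] <= thr
    top_mask = low if low_is_top else ~low
    mid = 0.5 * (lo + hi)
    s_top = tree_rank(X[top_mask], y[top_mask], depth - 1, mid, hi)
    s_bot = tree_rank(X[~top_mask], y[~top_mask], depth - 1, lo, mid)
    def score(Z):
        Z = np.asarray(Z, dtype=float)
        m = (Z[:, j] <= thr) if low_is_top else (Z[:, j] > thr)
        out = np.empty(len(Z))
        out[m] = s_top(Z[m])
        out[~m] = s_bot(Z[~m])
        return out
    return score

# Toy usage: positives concentrate at large values of the single feature.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
s = tree_rank(X, y, depth=2)
scores = s(X)  # positives receive strictly higher scores than negatives
```

The halving of the score interval at each recursion mirrors the paper's left-to-right leaf ordering: every cell ranked "above" its sibling keeps a strictly higher score range, so the induced ordering is exactly the tree's leaf order.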