The ranking problem has become increasingly important in modern applications of statistical methods in automated decision-making systems. In particular, we consider a formulation of the statistical ranking problem which we call subset ranking, and focus on the discounted cumulated gain (DCG) criterion, which measures the quality of items near the top of the rank-list. As with error minimization for binary classification, direct optimization of natural ranking criteria such as DCG leads to nonconvex optimization problems that can be NP-hard. Therefore, a computationally more tractable approach is needed. We present bounds that relate the approximate optimization of DCG to the approximate minimization of certain regression errors. These bounds justify the use of convex learning formulations for solving the subset ranking problem. The resulting estimation methods are not conventional, in that we focus on the estimation quality in the top portion of the rank-list. We further investigate the asymptotic statistical behavior of these formulations. Under appropriate conditions, the consistency of the estimation schemes with respect to the DCG metric can be derived.
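To make the regression route concrete, the following minimal Python sketch (illustrative only, not the paper's estimator) fits a least-squares model to graded relevance labels as a stand-in for a convex regression formulation, ranks the query's subset of items by the predicted scores, and evaluates the induced ordering with DCG. The linear-gain DCG with a 1/log2(1+i) position discount is one common convention; all names and data here are hypothetical.

```python
import numpy as np

def dcg_at_k(relevances, k=10):
    """DCG of a relevance sequence in ranked order, with 1/log2(1+i) discounts (i is 1-based)."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    return float(np.sum(rel * discounts))

def ndcg_at_k(relevances_in_predicted_order, k=10):
    """Normalize by the DCG of the ideal (relevance-sorted) ordering."""
    ideal = dcg_at_k(sorted(relevances_in_predicted_order, reverse=True), k)
    return dcg_at_k(relevances_in_predicted_order, k) / ideal if ideal > 0 else 0.0

# Toy subset-ranking instance: one query's subset of 20 items with 5 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w_true = rng.normal(size=5)
labels = np.clip(np.round(X @ w_true), 0, 4)   # graded relevance in {0, ..., 4}

# Squared loss as the convex surrogate: fit scores to labels, then sort.
w_hat, *_ = np.linalg.lstsq(X, labels, rcond=None)
order = np.argsort(-(X @ w_hat))               # descending predicted relevance
print("NDCG@10:", ndcg_at_k(labels[order], k=10))
```

The point of the sketch is the shape of the pipeline: minimizing a convex regression loss over the subset, then sorting by the fitted scores, is the kind of scheme whose approximate optimality with respect to DCG the abstract's bounds are meant to justify.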