A widespread approach to the ranking problem is to reduce it to a set of binary preferences and apply well-studied classification methods. In particular, we consider this reduction for generic subset ranking, which is based on minimizing position-sensitive loss functions. The basic question addressed in this paper is whether an accurate classifier transfers directly into a good ranker. We propose a consistent reduction framework guaranteeing that the minimal regret of zero for subset ranking is achievable by learning binary preferences assigned importance weights. This fact allows us to further develop a novel upper bound on the subset ranking regret in terms of binary regrets: we show that their ratio is at most twice the maximal deviation of discounts between adjacent positions. We also present a refined version of this bound for the case where only the quality of the top rank positions is of concern. These bounds provide theoretical support for using the resulting binary classifiers to solve the subset ranking problem.
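The reduction described above can be illustrated with a small sketch: a subset-ranking instance (a list of documents with graded relevance labels) is turned into importance-weighted binary preference examples, and the discount gap between adjacent positions that appears in the regret bound is computed explicitly. The weighting scheme below (relevance gaps as importance weights, DCG-style logarithmic discounts) is an illustrative assumption, not the paper's exact construction.

```python
import itertools
import math

def pairwise_examples(relevances):
    """Reduce a subset-ranking instance to importance-weighted binary
    preference examples.

    Sketch only: the DCG-style discount 1/log2(pos + 1) and the use of
    relevance gaps as importance weights are assumptions for illustration.
    Returns the weighted preference pairs and the maximal deviation of
    discounts between adjacent positions (the quantity that scales the
    ranking-vs-binary regret ratio in the bound).
    """
    n = len(relevances)
    # DCG-style position discounts for positions 1..n
    discount = [1.0 / math.log2(p + 1) for p in range(1, n + 1)]
    # Maximal deviation of discounts between adjacent positions
    max_adjacent_gap = max(discount[p] - discount[p + 1] for p in range(n - 1))

    examples = []
    for i, j in itertools.combinations(range(n), 2):
        if relevances[i] == relevances[j]:
            continue  # ties induce no binary preference
        # Order the pair so the more relevant document comes first,
        # and weight the example by the relevance gap
        pref = (i, j) if relevances[i] > relevances[j] else (j, i)
        weight = abs(relevances[i] - relevances[j])
        examples.append((pref, weight))
    return examples, max_adjacent_gap
```

Any cost-sensitive binary classifier trained on such weighted pairs can then be used to rank the subset, e.g. by counting weighted wins per document.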