Efficient gradient descent algorithm for sparse models with application in learning-to-rank
Knowledge-Based Systems
Increasing attention has recently been devoted to directly optimizing ranking measures and to inducing sparsity in learned models, yet few attempts have been made to connect the two in learning to rank. In this paper, we consider sparse algorithms that directly optimize Normalized Discounted Cumulative Gain (NDCG), a widely used ranking measure. We first establish a reduction framework that reduces ranking, as measured by NDCG, to importance-weighted pairwise classification. We then provide a sound theoretical guarantee for this reduction, bounding the realized NDCG regret by a properly weighted pairwise classification regret, which implies that good performance transfers robustly from pairwise classification to ranking. The converted pairwise loss function makes it natural to incorporate sparsity into the ranking model and to derive a gradient that retains the performance guarantee. To achieve sparsity, we devise a novel algorithm, RSRank, which performs L1 regularization via truncated gradient descent. Finally, experimental results on benchmark collections confirm the significant advantage of RSRank over several baseline methods.
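To make the reduction concrete, the sketch below builds importance-weighted pairwise examples from a query's graded relevance labels. The weighting used here, the difference of exponential gains normalized by the query's ideal DCG, is one plausible instantiation chosen for illustration and not necessarily the exact weights derived in the paper; the function name ndcg_pair_weights is ours.

import itertools
import numpy as np

def ndcg_pair_weights(labels):
    """Build importance-weighted pairwise examples from graded relevance labels.

    Illustrative weighting (an assumption, not the paper's exact scheme):
    the cost of misordering documents i and j is proportional to the
    difference of their exponential gains |2^y_i - 2^y_j|, normalized by
    the query's ideal DCG so that weights are comparable across queries.
    """
    labels = np.asarray(labels, dtype=float)
    gains = 2.0 ** labels - 1.0
    # Ideal DCG: sort gains in decreasing order, apply the log2 discount.
    ideal = np.sort(gains)[::-1]
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    idcg = float(np.dot(ideal, discounts))
    pairs = []
    for i, j in itertools.combinations(range(len(labels)), 2):
        if labels[i] == labels[j]:
            continue  # equally relevant documents carry no preference
        w = abs(gains[i] - gains[j]) / max(idcg, 1e-12)
        # Emit (preferred index, other index, importance weight).
        if labels[i] > labels[j]:
            pairs.append((i, j, w))
        else:
            pairs.append((j, i, w))
    return pairs

For a query with labels [2, 0, 1], this yields three weighted pairs, with the (2 vs. 0) pair weighted most heavily, so a classifier trained on these examples is penalized most for the misorderings that hurt NDCG most.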
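Likewise, a minimal sketch of the sparsity-inducing step, assuming an importance-weighted pairwise logistic loss: one stochastic gradient update followed by the shrinkage operator of truncated gradient descent (Langford, Li, and Zhang, 2009), applied every step for simplicity (K = 1). All function and parameter names are illustrative; the paper's loss and truncation schedule may differ.

import numpy as np

def truncate(w, alpha, theta):
    """Truncated-gradient shrinkage operator (Langford et al., 2009).

    Coefficients in (0, theta] are reduced by alpha and clipped at 0;
    coefficients in [-theta, 0) are shrunk symmetrically; coefficients
    with magnitude above theta are left untouched.
    """
    out = w.copy()
    pos = (out > 0) & (out <= theta)
    neg = (out < 0) & (out >= -theta)
    out[pos] = np.maximum(0.0, out[pos] - alpha)
    out[neg] = np.minimum(0.0, out[neg] + alpha)
    return out

def pairwise_sgd_step(w, x_pref, x_other, weight, eta, gravity, theta):
    """One stochastic step on weight * log(1 + exp(-w.(x_pref - x_other))),
    followed by truncation to induce sparsity. Hypothetical names."""
    diff = x_pref - x_other
    margin = np.dot(w, diff)
    grad = -weight * diff / (1.0 + np.exp(margin))
    w = w - eta * grad
    return truncate(w, eta * gravity, theta)

Unlike subtracting a subgradient of the L1 term, which rarely drives coefficients exactly to zero under stochastic updates, the truncation operator snaps small coefficients to zero, so the learned ranking model is genuinely sparse.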