Recently, direct optimization of information retrieval (IR) measures has become a new trend in learning to rank. In this paper, we propose a general framework for direct optimization of IR measures that enjoys several theoretical advantages. The framework, which can be applied to most IR measures, addresses the task by approximating the measures and then optimizing the resulting smooth surrogate functions. Theoretical analysis shows that high approximation accuracy can be achieved within the framework. We take average precision (AP) and normalized discounted cumulative gain (NDCG) as examples to demonstrate how to instantiate the framework. Experiments on benchmark datasets show that the algorithms derived from our framework are highly effective compared with existing methods, and the empirical results agree well with the theoretical analysis presented in the paper.
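To make the idea concrete, below is a minimal sketch of one common way to smooth a rank-based measure such as NDCG: each document's rank position is approximated by a sum of sigmoids over score differences, which makes the surrogate differentiable in the model scores. This is an illustrative implementation of the general sigmoid-based approximation idea, not necessarily the exact formulation used in the paper; the function names, the temperature parameter `alpha`, and the gain/discount choices are assumptions for this sketch.

```python
import math


def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)


def approx_ranks(scores, alpha=10.0):
    """Smooth approximation of 1-based rank positions.

    The hard position of document i, 1 + |{j : s_j > s_i}|, is replaced by
    pi_hat(i) = 1 + sum_{j != i} sigmoid(alpha * (s_j - s_i)).
    As alpha grows, pi_hat approaches the true integer positions.
    """
    n = len(scores)
    ranks = []
    for i in range(n):
        r = 1.0
        for j in range(n):
            if j != i:
                r += sigmoid(alpha * (scores[j] - scores[i]))
        ranks.append(r)
    return ranks


def approx_ndcg(scores, rels, alpha=10.0):
    """Differentiable surrogate for NDCG built on the approximated ranks.

    Uses the standard exponential gain (2^rel - 1) and log2 discount.
    """
    ranks = approx_ranks(scores, alpha)
    dcg = sum((2 ** r - 1) / math.log2(1.0 + p) for r, p in zip(rels, ranks))
    # Ideal DCG: relevance labels sorted in decreasing order.
    idcg = sum((2 ** r - 1) / math.log2(2.0 + i)
               for i, r in enumerate(sorted(rels, reverse=True)))
    return dcg / idcg if idcg > 0 else 0.0
```

With a large `alpha` the surrogate tightens toward the true NDCG (e.g., a perfectly ordered list scores close to 1.0), while a smaller `alpha` yields a smoother, easier-to-optimize objective; this is exactly the approximation-accuracy trade-off the theoretical analysis in the paper is concerned with.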