A machine learning approach to learning to rank trains a model to optimize a target evaluation measure with respect to training data. Currently, existing information retrieval measures cannot be optimized directly, except for models with a very small number of parameters. The IR community thus faces a major challenge: how to optimize IR measures of interest directly. In this paper, we present a solution. Specifically, we show that LambdaRank, which smoothly approximates the gradient of the target measure, can be adapted to work with four popular IR target evaluation measures using the same underlying gradient construction. It is likely, therefore, that this construction is extendable to other evaluation measures. We empirically show that LambdaRank finds a locally optimal solution for mean NDCG@10, mean NDCG, MAP and MRR with a 99% confidence rate. We also show that the amount of effective training data varies with the IR measure, and that with a sufficiently large training set, matching the training optimization measure to the target evaluation measure yields the best accuracy.
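To make the gradient construction concrete, below is a minimal sketch (not the authors' implementation) of LambdaRank-style lambda gradients for a single query, using NDCG as the target measure. The key idea the abstract describes is that each pairwise RankNet-style gradient is scaled by the absolute change in the target measure from swapping the two documents; substituting the swap delta of another measure (MAP, MRR, NDCG@10) reuses the same construction. Function names and the `sigma` parameter are illustrative choices, not from the paper.

```python
import math

def dcg_discount(rank):
    # Position discount used by NDCG: 1 / log2(rank + 2) for 0-based ranks.
    return 1.0 / math.log2(rank + 2)

def lambda_gradients(scores, labels, sigma=1.0):
    """LambdaRank-style gradients for one query (illustrative sketch).

    For each pair (i, j) with labels[i] > labels[j], the pairwise lambda is
    the RankNet gradient scaled by |delta NDCG|, the change in NDCG obtained
    by swapping documents i and j in the current ranking.
    """
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])  # current ranking
    rank = {doc: pos for pos, doc in enumerate(order)}
    gains = [2 ** l - 1 for l in labels]
    ideal = sorted(gains, reverse=True)
    ideal_dcg = sum(g * dcg_discount(p) for p, g in enumerate(ideal))
    lambdas = [0.0] * n
    for i in range(n):
        for j in range(n):
            if labels[i] <= labels[j]:
                continue
            # |change in NDCG| if documents i and j swapped positions.
            delta = abs(gains[i] - gains[j]) * \
                abs(dcg_discount(rank[i]) - dcg_discount(rank[j])) / ideal_dcg
            rho = 1.0 / (1.0 + math.exp(sigma * (scores[i] - scores[j])))
            lambdas[i] += sigma * rho * delta  # push the relevant doc up
            lambdas[j] -= sigma * rho * delta  # push the other doc down
    return lambdas
```

Accumulated per-document lambdas then drive a standard gradient step on the model's scores; only the `delta` term changes when a different evaluation measure is targeted.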