IR evaluation methods for retrieving highly relevant documents. SIGIR '00: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Modern Information Retrieval.
An efficient boosting algorithm for combining preferences. The Journal of Machine Learning Research.
Learning to rank using gradient descent. ICML '05: Proceedings of the 22nd International Conference on Machine Learning.
Learning to rank: from pairwise approach to listwise approach. Proceedings of the 24th International Conference on Machine Learning.
FRank: a ranking method with fidelity loss. SIGIR '07: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
AdaRank: a boosting algorithm for information retrieval. SIGIR '07: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
SoftRank: optimizing non-smooth rank metrics. WSDM '08: Proceedings of the 2008 International Conference on Web Search and Data Mining.
On statistical analysis and optimization of information retrieval effectiveness metrics. Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval.
LETOR: a benchmark collection for research on learning to rank for information retrieval. Information Retrieval.
List-wise learning to rank with matrix factorization for collaborative filtering. Proceedings of the Fourth ACM Conference on Recommender Systems.
BagBoo: a scalable hybrid bagging-the-boosting model. CIKM '10: Proceedings of the 19th ACM International Conference on Information and Knowledge Management.
Learning to re-rank web search results with multiple pairwise features. Proceedings of the Fourth ACM International Conference on Web Search and Data Mining.
Learning to rank with multiple objective functions. Proceedings of the 20th International Conference on World Wide Web.
A stochastic learning-to-rank algorithm and its application to contextual advertising. Proceedings of the 20th International Conference on World Wide Web.
Bagging gradient-boosted trees for high precision, low variance ranking models. Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Smoothing NDCG metrics using tied scores. Proceedings of the 20th ACM International Conference on Information and Knowledge Management.
A noise-tolerant graphical model for ranking. Information Processing and Management: An International Journal.
CTR prediction for contextual advertising: learning-to-rank approach. Proceedings of the Seventh International Workshop on Data Mining for Online Advertising.
CRF framework for supervised preference aggregation. Proceedings of the 22nd ACM International Conference on Information & Knowledge Management.
CoBaFi: collaborative Bayesian filtering. Proceedings of the 23rd International Conference on World Wide Web.
Improving ranking performance with cost-sensitive ordinal classification via regression. Information Retrieval.
Ranking a set of retrieved documents by their relevance to a query is a central problem in information retrieval. Learned ranking functions are difficult to optimize directly, because ranking performance is typically judged by metrics that are not smooth. In this paper we propose a new listwise approach to learning to rank. Our method defines a conditional probability distribution over the rankings of the documents for a given query, which permits gradient ascent on the expected value of a chosen performance measure. The rank probabilities take the form of a Boltzmann distribution, based on an energy function derived from a scoring function composed of individual and pairwise potentials. Including pairwise potentials is a novel contribution: it allows the model to encode regularities in the relative scores of documents, whereas existing models assign scores at test time based only on individual documents, with no pairwise constraints between them. Experimental results on the LETOR 3.0 data set show that our method outperforms existing learning approaches to ranking.
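The core idea in the abstract, a Boltzmann distribution over rankings whose energy combines individual and pairwise potentials, yielding a smooth expected metric, can be illustrated with a toy sketch. This is a minimal illustration under assumed forms for the potentials (position-discounted individual scores plus a bonus for preferred orderings); the paper's exact energy function, potentials, and training procedure may differ, and the names below are hypothetical. For a handful of documents the distribution can be computed by brute-force enumeration of all permutations:

```python
import itertools
import math
import numpy as np

def energy(perm, indiv, pair):
    """Energy of one ranking: lower energy = more probable ranking.
    Assumed form: position-discounted individual potentials plus a
    pairwise potential for each ordered pair (doc a ranked above doc b)."""
    e = 0.0
    for pos, d in enumerate(perm):
        e -= indiv[d] / math.log2(pos + 2)      # individual potential
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            e -= pair[perm[i], perm[j]]         # pairwise potential
    return e

def rank_distribution(indiv, pair):
    """Boltzmann distribution over all rankings: P(pi) ∝ exp(-E(pi))."""
    perms = list(itertools.permutations(range(len(indiv))))
    logits = np.array([-energy(p, indiv, pair) for p in perms])
    probs = np.exp(logits - logits.max())       # subtract max for stability
    return perms, probs / probs.sum()

def dcg(perm, rel):
    return sum(rel[d] / math.log2(pos + 2) for pos, d in enumerate(perm))

def expected_ndcg(indiv, pair, rel):
    """Expected NDCG under the rank distribution: the smooth surrogate
    objective that gradient ascent could optimize (only evaluated here)."""
    perms, probs = rank_distribution(indiv, pair)
    ideal = dcg(sorted(range(len(rel)), key=lambda d: -rel[d]), rel)
    return sum(p * dcg(perm, rel) / ideal for perm, p in zip(perms, probs))

# Toy query with 3 documents (all values illustrative).
rel = [2, 0, 1]                                 # graded relevance labels
indiv = np.array([1.5, 0.2, 0.8])               # learned individual scores
pair = np.zeros((3, 3))
pair[0, 2] = 0.5                                # prefer doc 0 above doc 2
print(expected_ndcg(indiv, pair, rel))
```

Because the expected NDCG is a probability-weighted average over all rankings, it varies smoothly with the potentials, which is what makes gradient-based training possible even though NDCG itself is a step function of the scores. Brute-force enumeration is only feasible for tiny document sets; a practical implementation would need sampling or other approximations.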