One of the promising directions in learning-to-rank research concerns the appropriate choice of the objective function that a machine learning algorithm should maximize. We describe a novel technique for smoothing an arbitrary ranking metric and demonstrate how to use it to maximize retrieval quality in terms of the NDCG metric. The idea behind our listwise ranking model, called TieRank, is an artificial probabilistic tying of predicted relevance scores at each iteration of the learning process, which defines a distribution over the set of all permutations of the retrieved documents. This distribution yields the desired smoothed version of the target retrieval quality metric, which can then be maximized by gradient descent. Experiments on the LETOR collections show that TieRank outperforms most existing learning-to-rank algorithms.
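To make the idea concrete, the sketch below shows why a permutation distribution smooths a rank metric. It is an illustrative approximation only, not the paper's TieRank algorithm: Gaussian perturbation of scores (a Monte-Carlo expectation over induced permutations, in the spirit of SoftRank) stands in for the paper's probabilistic tying, and the function names and `sigma` parameter are hypothetical.

```python
import numpy as np

def dcg(relevances, order):
    # Discounted cumulative gain of documents ranked in the given order.
    gains = 2.0 ** relevances[order] - 1.0
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    return float(np.sum(gains * discounts))

def ndcg(scores, relevances):
    # NDCG: DCG of the score-induced ranking divided by the ideal DCG.
    # As a function of `scores` this is piecewise constant, hence non-smooth.
    predicted = np.argsort(-scores)
    ideal = np.argsort(-relevances)
    ideal_dcg = dcg(relevances, ideal)
    return dcg(relevances, predicted) / ideal_dcg if ideal_dcg > 0 else 0.0

def smoothed_ndcg(scores, relevances, sigma=0.5, n_samples=2000, seed=0):
    # Expected NDCG under Gaussian score perturbations: the noise induces
    # a distribution over permutations, making the expectation a smooth
    # function of the scores that a gradient method could optimize.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        noisy = scores + sigma * rng.standard_normal(scores.shape)
        total += ndcg(noisy, relevances)
    return total / n_samples
```

A perfectly ordered score vector attains NDCG of exactly 1, while its smoothed value stays slightly below 1, since the perturbation occasionally swaps neighboring documents; shrinking `sigma` recovers the original metric in the limit.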