The LETOR website contains three information retrieval datasets used as a benchmark for testing machine learning ideas for ranking. Participating algorithms are measured using standard IR ranking measures (NDCG, precision, MAP). Like other participating algorithms, we train a linear classifier. In contrast to them, we introduce an additional free variable for each query. This expresses the fact that results for different queries are incomparable for the purpose of determining relevance. Our results are slightly better than those of the reported participating algorithms, while our method is significantly simpler.
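The idea of a free per-query variable can be sketched as a linear scoring model with one bias term per query: the bias absorbs query-specific offsets in the labels, so the learned weight vector only has to order documents within each query. The following is a minimal illustration, not the authors' implementation; the toy data, the logistic loss, and the gradient-descent training loop are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical toy data: n documents with d features, relevance labels y,
# and a query id q[i] for each document. Labels include a large
# query-dependent offset, so absolute scores are incomparable across queries.
rng = np.random.default_rng(0)
n, d, n_queries = 60, 5, 6
X = rng.normal(size=(n, d))
q = np.repeat(np.arange(n_queries), n // n_queries)
w_true = rng.normal(size=d)
offsets = rng.normal(scale=2.0, size=n_queries)
y = (X @ w_true + offsets[q] > 0).astype(float)

# Model: score(i) = w . x_i + b_{q(i)}, with one free bias b_j per query.
w = np.zeros(d)
b = np.zeros(n_queries)
lr = 0.05
losses = []
for _ in range(300):
    s = X @ w + b[q]
    p = 1.0 / (1.0 + np.exp(-s))          # sigmoid of the score
    losses.append(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
    g = p - y                             # d(logistic loss)/d(score)
    w -= lr * (X.T @ g) / n               # shared weight update
    b -= lr * np.bincount(q, weights=g, minlength=n_queries) / n  # per-query bias update
```

Because each `b[j]` is free, two documents from different queries can receive very different raw scores without penalty; only within-query orderings constrain `w`, which is the incomparability the abstract describes.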