Minimal test collections for retrieval evaluation
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
ECIR '09 Proceedings of the 31st European Conference on IR Research on Advances in Information Retrieval
Document selection methodologies for efficient and effective learning-to-rank
Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
Deep versus shallow judgments in learning to rank
Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
Active learning for ranking through expected loss optimization
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Methods that reduce the amount of labeled data needed for training have focused more on selecting which documents to label than on which queries should be labeled. One exception (Long et al. 2010) uses expected loss optimization (ELO) to estimate which queries should be selected, but it is limited to rankers that predict absolute graded relevance. In this work, we show how to easily adapt ELO to work with any ranker, and we demonstrate that estimating expected loss in DCG is more robust than estimating it in NDCG, even when the final performance measure is NDCG.
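The abstract's core quantity, the expected loss of a query under uncertain relevance labels, can be illustrated with a small Monte Carlo sketch. This is not the paper's implementation; the function names, the per-document grade distributions, and the choice of measuring loss against the best achievable DCG for each sample are all illustrative assumptions.

```python
import math
import random

def dcg(rels):
    # Graded-relevance DCG with the standard 2^r - 1 gain and log2 discount.
    return sum((2 ** r - 1) / math.log2(rank + 2) for rank, r in enumerate(rels))

def expected_loss(rel_dist, ranking, n_samples=1000, metric=dcg, rng=None):
    """Monte Carlo estimate of a query's expected loss (illustrative sketch).

    rel_dist: per-document distribution over relevance grades,
              e.g. {"d1": [0.1, 0.6, 0.3]} for P(grade = 0, 1, 2).
    ranking:  document ids in the order the ranker returned them.
    Loss for one sample is the gap between the metric of the ideal
    reordering and the metric of the given ranking.
    """
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_samples):
        # Sample one grade per document from its predictive distribution.
        sampled = {d: rng.choices(range(len(p)), weights=p)[0]
                   for d, p in rel_dist.items()}
        rels = [sampled[d] for d in ranking]
        best = metric(sorted(rels, reverse=True))
        total += best - metric(rels)
    return total / n_samples
```

Under a query-selection strategy in the spirit of ELO, the queries with the highest expected loss are the most informative ones to send to assessors. Using raw DCG as the metric here, rather than NDCG, avoids dividing by a sampled ideal DCG that is itself uncertain, which is one intuition for the robustness claim above.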