Active evaluation of ranking functions based on graded relevance
Machine Learning
Active evaluation of ranking functions based on graded relevance (extended abstract)
IJCAI '13: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
Evaluating the quality of ranking functions is a core task in web search and other information retrieval domains. Because query distributions and item relevance change over time, ranking models often cannot be evaluated accurately on held-out training data. Instead, considerable effort is spent on manually labeling the relevance of query results for test queries in order to track ranking performance. We address the problem of estimating ranking performance as accurately as possible on a fixed labeling budget. Estimates are based on the most informative test queries, selected by an active sampling distribution. Query labeling costs depend on the number of result items as well as item-specific attributes such as document length. We derive cost-optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain (DCG) and Expected Reciprocal Rank (ERR). Experiments on web search engine data show that the approach yields significant reductions in labeling costs.
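To make the estimation scheme concrete, below is a minimal Python sketch of a budget-constrained, importance-weighted performance estimate: queries are drawn from a caller-supplied sampling distribution, labeled at a per-query cost, and their DCG (or ERR) values are combined with weights inverse to their sampling probabilities. Everything here is illustrative: the function names, the dict-based query representation with a precomputed 'cost' field, and the stand-in `label_fn` for manual relevance judgments are assumptions, and the paper's actual cost-optimal sampling distributions are derived analytically rather than reproduced here.

```python
import numpy as np

def dcg(relevances):
    # Discounted Cumulative Gain for one ranked result list,
    # given graded relevance labels ordered from rank 1 downward.
    relevances = np.asarray(relevances, dtype=float)
    ranks = np.arange(1, len(relevances) + 1)
    return float(np.sum((2.0 ** relevances - 1.0) / np.log2(ranks + 1)))

def err(relevances, max_grade=4):
    # Expected Reciprocal Rank: a user scans down the list and stops
    # at rank r with probability R_r = (2^grade - 1) / 2^max_grade.
    stop = (2.0 ** np.asarray(relevances, dtype=float) - 1.0) / 2 ** max_grade
    value, p_reach = 0.0, 1.0
    for rank, p_stop in enumerate(stop, start=1):
        value += p_reach * p_stop / rank
        p_reach *= 1.0 - p_stop
    return value

def sample_and_estimate(queries, proposal, label_fn, budget, metric=dcg, rng=None):
    # Draw queries i.i.d. from `proposal` until the labeling budget is
    # spent, then return the self-normalized importance-weighted estimate
    # sum_i w_i * metric_i / sum_i w_i, with weights w_i = 1 / proposal_i.
    rng = rng or np.random.default_rng()
    proposal = np.asarray(proposal, dtype=float)
    proposal = proposal / proposal.sum()
    spent = weighted_sum = weight_total = 0.0
    while True:
        i = rng.choice(len(queries), p=proposal)
        if spent + queries[i]["cost"] > budget:
            break  # the next judgment would exceed the labeling budget
        spent += queries[i]["cost"]
        labels = label_fn(queries[i])   # stand-in for manual judgments
        w = 1.0 / proposal[i]
        weighted_sum += w * metric(labels)
        weight_total += w
    return weighted_sum / weight_total if weight_total > 0 else float("nan")
```

A common cost-aware heuristic for the proposal, again an assumption rather than the paper's derivation, is to make each query's sampling probability roughly proportional to sqrt(predicted score dispersion / labeling cost), so that expensive, low-information queries are judged less often while the inverse-probability weights keep the estimate consistent.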