Selecting a subset of queries for acquisition of further relevance judgements

  • Authors: Mehdi Hosseini; Ingemar J. Cox; Natasa Milic-Frayling; Vishwa Vinay; Trevor Sweeting
  • Affiliations: University College London; University College London; Microsoft Research Cambridge; Microsoft Research Cambridge; University College London
  • Venue: ICTIR'11: Proceedings of the Third International Conference on Advances in Information Retrieval Theory
  • Year: 2011


Abstract

Assessing the relative performance of search systems requires a test collection with a pre-defined set of queries and corresponding relevance assessments. The state-of-the-art process for constructing test collections uses a large number of queries and, for each query, selects a set of documents submitted by a group of participating systems to be judged. However, the initial set of judgments may be insufficient to reliably evaluate the performance of future, as-yet-unseen systems. In this paper, we propose a method that expands the set of relevance judgments as new systems are evaluated. We assume that there is a limited budget for acquiring additional relevance judgments. From the documents retrieved by the new systems we create a pool of unjudged documents. Rather than distributing the budget uniformly across all queries, we first select a subset of queries that are effective in evaluating systems and then allocate the budget uniformly across only these queries. Experimental results on the TREC 2004 Robust track test collection demonstrate the superiority of this budget allocation strategy.
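
The sketch below illustrates the allocation strategy outlined in the abstract: pool unjudged documents from the new systems, pick a subset of queries, and spread the judging budget uniformly over that subset only. The query-scoring function is a placeholder assumption; the abstract does not specify the paper's actual criterion for deciding which queries are "effective in evaluating systems", so `query_score` here is purely hypothetical.

```python
# Minimal sketch of the budget allocation strategy described in the abstract.
# `query_score` is a hypothetical stand-in for the paper's query-selection
# criterion, which is not detailed in the abstract.

def allocate_budget(unjudged_pool, budget, select_k, query_score):
    """Split a judging budget uniformly over a selected subset of queries.

    unjudged_pool : dict mapping query_id -> list of unjudged document ids
                    (pooled from the runs of the new systems)
    budget        : total number of additional relevance judgments affordable
    select_k      : number of queries to retain for further judging
    query_score   : callable(query_id) -> float; higher means the query is
                    assumed more useful for evaluating systems
    """
    # Rank queries by their (assumed) usefulness and keep the top-k subset.
    selected = sorted(unjudged_pool, key=query_score, reverse=True)[:select_k]

    # Spread the budget uniformly over the selected queries only.
    per_query = budget // len(selected)
    plan = {}
    for qid in selected:
        # Judge at most `per_query` documents from this query's unjudged pool.
        plan[qid] = unjudged_pool[qid][:per_query]
    return plan
```

For comparison, the baseline the abstract argues against would instead assign roughly `budget // len(unjudged_pool)` judgments to every query, regardless of how informative each query is for distinguishing between systems.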