Measuring the reusability of test collections

  • Authors:
  • Ben Carterette, Evgeniy Gabrilovich, Vanja Josifovski, Donald Metzler

  • Affiliations:
  • University of Delaware, Newark, DE, USA; Yahoo! Research, Santa Clara, CA, USA; Yahoo! Research, Santa Clara, CA, USA; Yahoo! Research, Santa Clara, CA, USA

  • Venue:
  • Proceedings of the Third ACM International Conference on Web Search and Data Mining (WSDM '10)
  • Year:
  • 2010

Abstract

While test collection construction is a time-consuming and expensive process, the true cost is amortized by reusing the collection over hundreds or thousands of experiments. Some of these experiments may involve systems that retrieve documents not judged during the initial construction phase, and some of these systems may be "hard" to evaluate: depending on which judgments are missing and which judged documents were retrieved, the experimenter's confidence in an evaluation may be very low. We propose two methods for quantifying the reusability of a test collection for evaluating new systems. The proposed methods provide simple yet highly effective tests for determining whether an existing set of judgments is useful for evaluating a new system. Empirical evaluations using TREC datasets confirm the usefulness of our proposed reusability measures. In particular, we show that our methods can reliably estimate confidence intervals that are indicative of collection reusability.
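The abstract stays at a high level, so the sketch below is only a rough illustration of the kind of signals involved, not the methods proposed in the paper: it computes the fraction of a new system's top-ranked documents that have judgments, and a percentile-bootstrap confidence interval over per-topic average precision computed on judged documents only. The function names, the condensed-list AP convention, and the `run`/`qrels` dictionary layout are assumptions made here for illustration.

```python
import random
from statistics import mean

def average_precision(ranking, qrels):
    """AP over judged documents only; unjudged documents are skipped
    (a common 'condensed list' convention when judgments are incomplete)."""
    relevant = {d for d, rel in qrels.items() if rel > 0}
    if not relevant:
        return 0.0
    hits, total, rank = 0, 0.0, 0
    for doc in ranking:
        if doc not in qrels:          # unjudged: skip rather than count as non-relevant
            continue
        rank += 1
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant)

def judged_coverage(ranking, qrels, k=10):
    """Fraction of the top-k retrieved documents that were judged at all."""
    top = ranking[:k]
    return sum(1 for d in top if d in qrels) / max(len(top), 1)

def bootstrap_ci(per_topic_scores, iters=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean score,
    resampling topics with replacement."""
    rng = random.Random(seed)
    n = len(per_topic_scores)
    means = sorted(
        mean(rng.choices(per_topic_scores, k=n)) for _ in range(iters)
    )
    lo = means[int((alpha / 2) * iters)]
    hi = means[int((1 - alpha / 2) * iters) - 1]
    return lo, hi

# Hypothetical data layout: qrels[topic][doc] -> judgment,
# run[topic] -> ranked list of doc ids from the new system.
def assess_run(run, qrels):
    topics = sorted(run)
    aps = [average_precision(run[t], qrels.get(t, {})) for t in topics]
    cov = mean(judged_coverage(run[t], qrels.get(t, {})) for t in topics)
    return {"MAP": mean(aps), "judged@10": cov, "95%_CI": bootstrap_ci(aps)}
```

Under these assumptions, low judged coverage or a wide confidence interval would suggest the existing judgments say little about the new system, which is the kind of situation the paper's reusability measures are designed to detect.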