On real-time ad-hoc retrieval evaluation

  • Authors:
  • Stephen E. Robertson; Evangelos Kanoulas

  • Affiliations:
  • Microsoft Research, Cambridge, United Kingdom; University of Sheffield, Sheffield, United Kingdom

  • Venue:
  • SIGIR '12: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval
  • Year:
  • 2012

Abstract

Lab-based evaluations typically assess the quality of a retrieval system with respect to its ability to retrieve documents that are relevant to the information need of an end user. In a real-time search task, however, users wish to retrieve not only the most relevant items but also the most recent ones. The current evaluation framework is not adequate for assessing a system's ability to retrieve items that are both recent and relevant, and the framework proposed in the recent TREC Microblog Track has certain flaws that quickly became apparent to the organizers. In this poster, we redefine the experiment for a real-time ad-hoc search task by setting new requirements for submitted systems/runs, proposing metrics for evaluating the submissions, and suggesting a pooling strategy for gathering the relevance judgments needed to compute the proposed metrics. The proposed task can indeed assess the quality of a retrieval system with regard to retrieving both relevant and timely information.
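
The abstract names, but does not define, the proposed metrics and pooling strategy. Purely as an illustration of the two ingredients such an evaluation needs, the sketch below combines graded relevance with an exponential recency decay (in the spirit of time-biased measures) and shows conventional depth-k pooling. The function names, the half-life parameter, and the decay form are assumptions made for illustration, not the poster's actual proposal.

    import math

    def recency_discounted_gain(relevance, age_seconds, half_life=3600.0):
        # Hypothetical per-item gain: graded relevance halves for every
        # `half_life` seconds of staleness. Not the poster's actual metric.
        return relevance * 0.5 ** (age_seconds / half_life)

    def rdcg_at_k(ranked, query_time, k=10, half_life=3600.0):
        # Recency-discounted cumulative gain over the top-k results.
        # `ranked` holds (relevance, timestamp) pairs in system order;
        # items timestamped after `query_time` score nothing, mirroring
        # the real-time requirement that systems not return future items.
        score = 0.0
        for rank, (rel, ts) in enumerate(ranked[:k], start=1):
            age = query_time - ts
            if age < 0:
                continue
            score += recency_discounted_gain(rel, age, half_life) / math.log2(rank + 1)
        return score

    def depth_k_pool(runs, depth=30):
        # Conventional depth-k pooling: the union of the top-`depth`
        # document ids across all submitted runs is sent for judging.
        # The poster suggests its own pooling strategy; the abstract does
        # not specify it, so only the standard baseline is shown here.
        pool = set()
        for ranking in runs:
            pool.update(ranking[:depth])
        return pool

    # Example: a fresh, relevant item at rank 1 outscores a stale one.
    run = [(2, 9_500), (2, 3_600), (0, 9_000)]
    print(rdcg_at_k(run, query_time=10_000, half_life=3_600))

One design note: in a real-time setting the judging pool, like the runs themselves, should exclude documents published after the query time, a constraint the standard depth-k baseline above does not enforce on its own.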