Relevance judgments between TREC and Non-TREC assessors

  • Authors:
  • Azzah Al-Maskari;Mark Sanderson;Paul Clough

  • Affiliations:
  • University of Sheffield, Sheffield, United Kingdom;University of Sheffield, Sheffield, United Kingdom;University of Sheffield, Sheffield, United Kingdom

  • Venue:
  • Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
  • Year:
  • 2008

Abstract

This paper investigates the agreement of relevance assessments between official TREC judgments and those generated from an interactive IR experiment. Results show that 63% of documents judged relevant by our users matched official TREC judgments. Several factors contributed to differences in agreement: the number of relevant documents retrieved, the number of relevant documents judged, system effectiveness per topic, and the ranking of relevant documents.
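
The headline agreement figure is an overlap rate: the fraction of documents judged relevant by the study's users that official TREC assessors also judged relevant. The following is a minimal sketch of that computation, not code from the paper; the qrels file layout is the standard TREC format, and all function and variable names are illustrative assumptions.

```python
# Sketch of the overlap measure: the fraction of documents a user judged
# relevant that official TREC assessors also marked relevant.
# Assumes the standard TREC qrels layout: "topic iteration docid judgment".

def load_qrels(path: str) -> dict[tuple[str, str], int]:
    """Parse a TREC qrels file into {(topic, docid): judgment}."""
    qrels = {}
    with open(path) as f:
        for line in f:
            topic, _iteration, docid, judgment = line.split()
            qrels[(topic, docid)] = int(judgment)
    return qrels

def agreement_rate(user_relevant: set[tuple[str, str]],
                   trec_qrels: dict[tuple[str, str], int]) -> float:
    """Fraction of user-judged-relevant (topic, docid) pairs that TREC
    assessors also judged relevant (judgment > 0)."""
    if not user_relevant:
        return 0.0
    matches = sum(1 for pair in user_relevant if trec_qrels.get(pair, 0) > 0)
    return matches / len(user_relevant)

# A 63% match, as in the paper's headline result, would correspond to
# agreement_rate(user_judgments, load_qrels("qrels.txt")) == 0.63
# (file name hypothetical).
```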