The TREC 2005 robust track

  • Authors: Ellen M. Voorhees
  • Affiliations: National Institute of Standards and Technology, Gaithersburg, MD
  • Venue: ACM SIGIR Forum
  • Year: 2006

Abstract

The TREC robust retrieval track explores methods for improving the consistency of retrieval technology by focusing on poorly performing topics. The retrieval task in the track is a traditional ad hoc retrieval task where the evaluation methodology emphasizes a system's least effective topics. The 2005 edition of the track used 50 topics that had been demonstrated to be difficult on one document collection, and ran those topics on a different document collection. Relevance information from the first collection could be exploited in producing a query for the second collection, if desired. As in previous years, the most effective retrieval strategy was to expand queries using terms derived from additional corpora. The relative difficulty of topics differed across the two document sets.