Evaluating relevant in context: document retrieval with a twist

  • Authors:
  • Jaap Kamps; Mounia Lalmas; Jovan Pehcevski

  • Affiliations:
  • University of Amsterdam, Amsterdam, Netherlands; Queen Mary, University of London, London, United Kingdom; INRIA Rocquencourt, Le Chesnay, France

  • Venue:
  • SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
  • Year:
  • 2007

Abstract

The Relevant in Context retrieval task is document or article retrieval with a twist, where not only the relevant articles should be retrieved but also the relevant information within each article (captured by a set of XML elements) should be correctly identified. Our main research question is: how to evaluate the Relevant in Context task? We propose a generalized average precision measure that meets two main requirements: i) the score reflects the ranked list of articles inherent in the result list, and at the same time ii) the score also reflects how well the retrieved information per article (i.e., the set of elements) corresponds to the relevant information. The resulting measure was used at INEX 2006.
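The two requirements above can be combined by scoring each ranked article with a per-article value in [0, 1] (how well its retrieved elements match the relevant information) and then averaging precision over the article ranking, as in classic average precision. The sketch below illustrates this idea under stated assumptions: the per-article score `S(d)` is taken as given (e.g., an overlap or F-score of retrieved vs. relevant text within the article), non-relevant articles score 0, and generalized precision at rank r is the mean per-article score over the top r results. The function name and exact aggregation are illustrative, not the paper's official INEX 2006 implementation.

```python
def generalized_average_precision(per_article_scores, num_relevant):
    """Sketch of a generalized average precision measure.

    per_article_scores: list of S(d) in [0, 1] for each article in the
        ranked result list (0.0 for articles with no relevant content).
    num_relevant: total number of relevant articles in the collection.
    """
    if num_relevant == 0:
        return 0.0
    gap_sum = 0.0
    for r, score in enumerate(per_article_scores, start=1):
        if score > 0:
            # Generalized precision at rank r: mean per-article score
            # over the top r ranked articles.
            g_precision = sum(per_article_scores[:r]) / r
            gap_sum += g_precision
    # Average over all relevant articles, so unretrieved relevant
    # articles lower the score (recall component).
    return gap_sum / num_relevant
```

For example, a run that returns a perfectly identified article (score 1.0), then a non-relevant one, then a half-identified article (score 0.5), with two relevant articles in total, yields generalized precisions of 1.0 and 0.5 at the two relevant ranks, for a generalized AP of 0.75.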