Elicitation of term relevance feedback: an investigation of term source and context

  • Authors:
  • Diane Kelly; Xin Fu

  • Affiliations:
  • University of North Carolina, Chapel Hill, NC; University of North Carolina, Chapel Hill, NC

  • Venue:
  • SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
  • Year:
  • 2006

Abstract

Term relevance feedback has had a long history in information retrieval. However, research on interactive term relevance feedback has yielded mixed results. In this paper, we investigate several aspects of the elicitation of term relevance feedback: the display of document surrogates, the technique for identifying or selecting terms, and the sources of expansion terms. We conduct a between-subjects experiment (n=61) of three term relevance feedback interfaces using the 2005 TREC HARD collection, and evaluate each interface with respect to query length and retrieval performance. Results demonstrate that queries created with each experimental interface significantly outperformed the corresponding baseline queries, even though there were no differences in performance among the interface conditions. Results also demonstrate that pseudo-relevance feedback runs outperformed both baseline and experimental runs as assessed by recall-oriented measures, but that user-generated terms improved precision.
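The paper reports an interface study rather than an algorithm, but the pseudo-relevance feedback baseline it compares against is easy to sketch. The snippet below is a minimal, illustrative implementation of the general technique: treat the top-k documents from an initial ranking as relevant, score the terms they contain, and append the best-scoring new terms to the query. The function name, parameters, and IDF-style weighting are assumptions for illustration, not the actual runs used in the study.

```python
import math
from collections import Counter

def prf_expand(query_terms, ranked_docs, collection_docs, k=5, n_terms=3):
    """Expand a query with terms drawn from the top-k retrieved documents,
    which pseudo-relevance feedback simply assumes to be relevant.
    The scoring weight here is an illustrative IDF-style heuristic."""
    feedback = ranked_docs[:k]          # treat the top-k docs as relevant
    n_docs = len(collection_docs)

    # Document frequency of each term over the collection (for the IDF weight).
    df = Counter()
    for doc in collection_docs:
        df.update(set(doc))

    # Score candidate expansion terms found in the feedback documents.
    scores = Counter()
    for doc in feedback:
        for term in doc:
            if term not in query_terms:  # only add genuinely new terms
                scores[term] += math.log(n_docs / (1 + df[term]))

    expansion = [term for term, _ in scores.most_common(n_terms)]
    return list(query_terms) + expansion

# Toy demo: three tokenized "documents" double as collection and ranking.
docs = [["term", "relevance", "feedback", "query"],
        ["query", "expansion", "retrieval", "feedback"],
        ["user", "interface", "study", "retrieval"]]
print(prf_expand(["relevance", "feedback"], docs, docs))
```

The key design point, and the contrast the paper draws, is that this expansion happens with no user involvement: the system's own top-ranked documents supply the terms, whereas the experimental interfaces elicit term judgments from the user.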