Topic prediction based on comparative retrieval rankings

  • Authors: Chris Buckley
  • Affiliations: Sabir Research, Inc., Gaithersburg, Maryland
  • Venue: Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
  • Year: 2004

Abstract

A new measure, AnchorMap, is introduced to evaluate how close two document retrieval rankings are to each other. It is shown that AnchorMap scores, when computed over a set of initial ranked document lists from 8 different systems, are very highly correlated with the categorization of topics as easy or hard, and, separately, are highly correlated with the topics on which blind feedback works. In another experiment, AnchorMap is used to compare the initial ranked document list from a single system against the ranked document list from that system after blind feedback. Again, high AnchorMap values are highly correlated with both topic difficulty and successful application of blind feedback. Both experiments are examples of using properties of a topic that are independent of relevance information to predict the actual performance of IR systems on the topic. Initial experiments attempting to improve retrieval performance based upon AnchorMap failed; the causes of failure are discussed.
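The abstract does not give the formula for AnchorMap, so the sketch below is only a hypothetical illustration of the general idea it names: comparing two ranked document lists by treating the top-k documents of one list as pseudo-relevant "anchors" and scoring the other list by average precision against that anchor set. The function name anchor_ap, the cutoff k, and the toy document identifiers are all assumptions for illustration, not the paper's actual definition.

```python
from typing import List


def anchor_ap(ranking_a: List[str], ranking_b: List[str], k: int = 50) -> float:
    """Hypothetical rank-similarity score: treat the top-k documents of
    ranking_a as pseudo-relevant 'anchors' and compute the average
    precision of ranking_b with respect to that anchor set.

    This is an illustrative sketch of comparing two retrieval rankings;
    the paper's AnchorMap measure may be defined differently.
    """
    anchors = set(ranking_a[:k])
    if not anchors:
        return 0.0

    hits = 0
    precision_sum = 0.0
    for rank, doc in enumerate(ranking_b, start=1):
        if doc in anchors:
            hits += 1
            precision_sum += hits / rank  # precision at each anchor hit

    # Normalize by the number of anchors, as in standard average precision.
    return precision_sum / len(anchors)


# Toy example: compare an initial ranking with a post-blind-feedback ranking.
initial = ["d3", "d7", "d1", "d9", "d4"]
feedback = ["d7", "d3", "d9", "d5", "d1"]
print(anchor_ap(initial, feedback, k=3))  # closer to 1.0 when the lists agree near the top
```

Under this reading, a high score between the initial and feedback rankings would indicate that blind feedback left the top of the ranking largely intact, which is the kind of relevance-independent signal the experiments correlate with topic difficulty.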