Comparative analysis of clicks and judgments for IR evaluation

  • Authors:
  • Jaap Kamps, Marijn Koolen, Andrew Trotman

  • Affiliations:
  • University of Amsterdam, Amsterdam, The Netherlands (Jaap Kamps, Marijn Koolen); University of Otago, Dunedin, New Zealand (Andrew Trotman)

  • Venue:
  • Proceedings of the 2009 workshop on Web Search Click Data
  • Year:
  • 2009


Abstract

Queries and click-through data taken from search engine transaction logs are an attractive alternative to traditional test collections, due to their volume and their direct relation to end-user querying. The overall aim of this paper is to answer the question: how does click-through data differ from explicit human relevance judgments in information retrieval evaluation? We compare a traditional test collection with manual judgments to transaction-log-based test collections, using queries as topics and subsequent clicks as pseudo-relevance judgments for the clicked results. Specifically, we investigate the following two research questions. First, are there significant differences between clicks and relevance judgments? Earlier research suggests that although clicks and explicit judgments show reasonable agreement, clicks differ from static, absolute relevance judgments. Second, are there significant differences between system rankings based on clicks and those based on relevance judgments? This is an open question, but earlier research suggests that comparative evaluation in terms of system ranking is remarkably robust.
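
The comparison described in the abstract rests on two steps: converting click-through data into pseudo-relevance judgments, and checking whether systems are ordered the same way under click-based and judgment-based evaluation. The following Python sketch illustrates that pipeline under simplifying assumptions that are not taken from the paper: a click log as (query, clicked document) pairs, runs as dictionaries mapping queries to ranked document lists, P@10 as the effectiveness measure, and Kendall's tau as the rank correlation between the two system orderings.

```python
from collections import defaultdict
from scipy.stats import kendalltau


def clicks_to_pseudo_qrels(click_log):
    """Turn (query, clicked_doc) pairs into binary pseudo-relevance judgments:
    every document clicked for a query is treated as relevant for that query."""
    qrels = defaultdict(set)
    for query, doc_id in click_log:
        qrels[query].add(doc_id)
    return qrels


def precision_at_k(ranking, relevant, k=10):
    """Fraction of the top-k ranked documents that are judged (or clicked) relevant."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k


def evaluate_system(run, qrels, k=10):
    """Mean P@k of one system's run (query -> ranked list of doc ids) over all topics."""
    scores = [precision_at_k(ranking, qrels.get(query, set()), k)
              for query, ranking in run.items()]
    return sum(scores) / len(scores)


def compare_system_rankings(runs, manual_qrels, click_qrels, k=10):
    """Score every system under both judgment sets and return Kendall's tau
    between the two resulting system orderings."""
    systems = sorted(runs)
    manual_scores = [evaluate_system(runs[s], manual_qrels, k) for s in systems]
    click_scores = [evaluate_system(runs[s], click_qrels, k) for s in systems]
    tau, p_value = kendalltau(manual_scores, click_scores)
    return tau, p_value
```

In this sketch, a Kendall's tau close to 1 would mean that click-based evaluation ranks the systems almost identically to judgment-based evaluation, which is the system-ranking robustness question raised in the second part of the abstract.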