Evaluation in information retrieval

  • Authors: Stephen Robertson
  • Affiliations: Microsoft Research Ltd, Cambridge, UK
  • Venue: Lectures on information retrieval
  • Year: 2001

Abstract

In this talk I summarize the components of a traditional laboratory-style evaluation experiment in information retrieval (as exemplified by TREC) and discuss some of the issues surrounding this form of experiment. Some kinds of research questions fit very well into this framework; others fit much less easily. The major area of difficulty for the framework concerns the user interface and user information-seeking behaviour. I go on to discuss a series of experiments conducted at City University with the Okapi system, both of the traditional form and of a more user-oriented type. I then discuss the current TREC filtering track, which does not present quite such severe problems, but is nevertheless based on a simple model of how users might interact with the system; this has some effect on the experimental methodology.
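
To make the laboratory-style setup concrete, the sketch below shows the kind of arithmetic at the heart of a TREC-style evaluation: scoring a single system's ranked output for one query against binary relevance judgements, using precision, recall, and average precision. This is an illustrative assumption, not code from the talk; the function and document names are hypothetical.

```python
# Illustrative sketch (not from the paper): core metrics of a
# TREC-style laboratory evaluation for one query, given a ranked
# result list and a set of judged-relevant document IDs.

def precision_recall(ranking, relevant, k):
    """Precision and recall at cutoff k for one query."""
    retrieved = ranking[:k]
    hits = sum(1 for doc in retrieved if doc in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def average_precision(ranking, relevant):
    """Average of precision values at each rank where a relevant
    document appears, normalised by the number of relevant documents."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

if __name__ == "__main__":
    ranking = ["d3", "d1", "d7", "d2", "d5"]   # system output, best first
    relevant = {"d1", "d2", "d9"}              # judged relevant (hypothetical)
    p, r = precision_recall(ranking, relevant, k=5)
    ap = average_precision(ranking, relevant)
    print(f"P@5={p:.2f}  R@5={r:.2f}  AP={ap:.2f}")
```

In a full TREC run these per-query scores would be averaged over the topic set (e.g. mean average precision); the point of the sketch is that the whole evaluation reduces the user to a fixed query and a static set of relevance judgements, which is precisely where the user-oriented questions discussed in the talk fall outside the framework.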