On the evaluation of IR systems
Information Processing and Management: an International Journal - Special issue on evaluation issues in information retrieval
In this talk I summarize the components of a traditional laboratory-style evaluation experiment in information retrieval (as exemplified by TREC), and discuss some of the issues surrounding this form of experiment. Some kinds of research questions fit very well into this framework; others fit much less easily. The major area of difficulty for the framework concerns the user interface and user information-seeking behaviour. I go on to discuss a series of experiments conducted at City University with the Okapi system, both of the traditional form and of a more user-oriented type. Finally, I discuss the current TREC filtering track, which does not present quite such severe problems, but is nevertheless based on a simple model of how users might interact with the system; this has some effect on the experimental methodology.