Query performance analyser: a web-based tool for IR research and instruction

  • Authors:
  • Eero Sormunen; Sakari Hokkanen; Petteri Kangaslampi; Petri Pyy; Bemmu Sepponen

  • Affiliations:
  • University of Tampere, Finland (all authors)

  • Venue:
  • SIGIR '02: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
  • Year:
  • 2002

Abstract

The Interactive Query Performance Analyser (QPA) for information retrieval systems is a Web-based tool for analysing and comparing the performance of individual queries. On top of a standard test collection, it gives an instant visualisation of the performance achieved in a given search topic by any user-generated query. In addition to experimental IR research, QPA can be used in user training to demonstrate the characteristics of, and compare the differences between, IR systems and searching strategies.

The first prototype (versions 3.0 and 3.5) of the Query Performance Analyser was developed at the Department of Information Studies, University of Tampere, to serve as a tool for rapid query performance analysis, comparison and visualisation [4,5]. It has since been applied to the interactive optimisation of queries [2,3], and has also served in learning environments for IR [1].

The demonstration is based on the newest version of the Query Performance Analyser (v. 5.1). It is interfaced to a traditional Boolean IR system (TRIP) and a probabilistic IR system (Inquery), providing access to the TREC collection and two Finnish test collections. Version 5.1 supports multigraded relevance scales, new types of performance visualisations, and query conversions based on mono- and multilingual dictionaries.

The motivation for developing the analyser is to emphasise the necessity of analysing the behaviour of individual queries. Information retrieval experiments usually measure the average effectiveness of the IR methods developed; the analysis of individual queries is neglected, even though test results may contain individual topics where the general findings do not hold. For the real user of an IR system, the study of variation across queries is even more important than averages.
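To make the per-query analysis described above concrete, the following is a minimal Python sketch of how precision-recall points for a single user query on one topic could be computed from a test collection's relevance judgements. The function and data names are illustrative assumptions, not QPA's actual API.

```python
# Sketch of per-query performance analysis in the spirit of QPA: given a
# ranked result list and the relevance judgements for one topic, compute
# the precision-recall points a tool like QPA could plot for that query.
# Names here are illustrative, not taken from QPA itself.

def precision_recall_points(ranked_docs, relevant_docs):
    """Return (recall, precision) pairs at each relevant document retrieved."""
    points = []
    hits = 0
    for rank, doc_id in enumerate(ranked_docs, start=1):
        if doc_id in relevant_docs:
            hits += 1
            recall = hits / len(relevant_docs)
            precision = hits / rank
            points.append((recall, precision))
    return points

# Example: a single query on one topic, not an average over topics.
ranked = ["d3", "d7", "d1", "d9", "d4", "d2"]
qrels = {"d1", "d2", "d5", "d7"}  # documents judged relevant for this topic
for r, p in precision_recall_points(ranked, qrels):
    print(f"recall={r:.2f}  precision={p:.2f}")
```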
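Version 5.1's support for multigraded relevance scales suggests gain-based measures such as the (discounted) cumulated gain family developed at the same department. The sketch below assumes a 0-3 relevance scale and log base 2; the abstract does not specify which measures QPA actually computes, so this is one plausible instance rather than QPA's implementation.

```python
import math

# Sketch of a gain-based measure over multigraded relevance judgements.
# The gain values and log base are assumptions for illustration; follow
# the definition where documents before rank `base` are not discounted.

def discounted_cumulated_gain(gains, base=2):
    """Return the DCG value at each rank for a list of graded gains (e.g. 0-3)."""
    dcg = []
    total = 0.0
    for rank, g in enumerate(gains, start=1):
        # Ranks below the log base contribute their full gain; later ranks
        # are discounted by log_base(rank) to penalise late retrieval.
        total += g if rank < base else g / math.log(rank, base)
        dcg.append(total)
    return dcg

# Graded judgements for the ranked result list of one query.
print(discounted_cumulated_gain([3, 2, 0, 1, 2]))
```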