AutoEval: an evaluation methodology for evaluating query suggestions using query logs

  • Authors:
  • M-Dyaa Albakour;Udo Kruschwitz;Nikolaos Nanas;Yunhyong Kim;Dawei Song;Maria Fasli;Anne De Roeck

  • Affiliations:
  • University of Essex, Colchester, UK;University of Essex, Colchester, UK;Centre for Research and Technology - Thessaly, Greece;Robert Gordon University, Aberdeen, UK;Robert Gordon University, Aberdeen, UK;University of Essex, Colchester, UK;Open University, Milton Keynes, UK

  • Venue:
  • ECIR'11: Proceedings of the 33rd European Conference on Advances in Information Retrieval
  • Year:
  • 2011

Abstract

User evaluations of search engines are expensive and not easy to replicate. The problem is even more pronounced when assessing adaptive search systems, for example system-generated query modification suggestions that can be derived from past user interactions with a search engine. Automatically predicting the performance of different modification suggestion models before getting users involved is therefore highly desirable. AutoEval is an evaluation methodology that assesses the quality of query modifications generated by a model using the query logs of past user interactions with the system. We present experimental results from applying this methodology to different adaptive algorithms; these results suggest that the predicted quality of the algorithms is in line with user assessments. This makes AutoEval a suitable evaluation framework for adaptive, interactive search engines.
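
To make the log-replay idea concrete, below is a minimal Python sketch of an automatic evaluation loop in the spirit of AutoEval. The day-by-day replay, the suggest/update model interface, and the reciprocal-rank scoring are illustrative assumptions based on the abstract, not the paper's exact procedure.

```python
# A minimal sketch of a log-based evaluation loop in the spirit of AutoEval.
# Assumptions (hypothetical, not from the paper): the log is a list of
# (query, refinement) pairs grouped by day, and suggestion quality is scored
# by the reciprocal rank of the user's actual refinement among the model's
# ranked suggestions.

from typing import Callable, Dict, List, Tuple

# One log "day": pairs of (submitted query, the refinement the user chose next).
LogDay = List[Tuple[str, str]]


def reciprocal_rank(suggestions: List[str], actual: str) -> float:
    """Score a ranked suggestion list by where the user's real refinement appears."""
    for rank, suggestion in enumerate(suggestions, start=1):
        if suggestion == actual:
            return 1.0 / rank
    return 0.0  # the model never suggested what the user actually did


def autoeval(
    suggest: Callable[[str], List[str]],   # model: query -> ranked suggestions
    update: Callable[[LogDay], None],      # model adapts from a day's log
    log_days: List[LogDay],
) -> List[float]:
    """Replay the log day by day: score the model first, then let it adapt."""
    daily_scores: List[float] = []
    for day in log_days:
        scores = [reciprocal_rank(suggest(q), refinement) for q, refinement in day]
        daily_scores.append(sum(scores) / len(scores) if scores else 0.0)
        update(day)  # the model only sees a day's log after being evaluated on it
    return daily_scores


if __name__ == "__main__":
    # Toy adaptive model: remembers which refinements followed each query.
    memory: Dict[str, List[str]] = {}

    def suggest(query: str) -> List[str]:
        return memory.get(query, [])

    def update(day: LogDay) -> None:
        for query, refinement in day:
            memory.setdefault(query, [])
            if refinement not in memory[query]:
                memory[query].append(refinement)

    days = [
        [("exam timetable", "exam timetable 2011")],
        [("exam timetable", "exam timetable 2011"), ("library", "library opening hours")],
    ]
    print(autoeval(suggest, update, days))  # [0.0, 0.5]: quality improves as the model adapts
```

Because evaluation precedes adaptation on each day, the resulting score series can be compared across suggestion models without involving users, which is the property the abstract claims for AutoEval.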