Web searching with entity mining at query time
IRFC'12 Proceedings of the 5th conference on Multidisciplinary Information Retrieval
A testbed for the evaluation of interactive information access consists of three components: (1) a collection of documents, (2) a set of tasks/usages, and (3) a system. Whereas most evaluation initiatives provide only the first two components, the evaluation of interactive retrieval requires some standardization on the system side as well. Starting with the INEX interactive track in 2004, our group has developed an infrastructure for this type of evaluation. With the Daffodil (now ezDL) framework, we provided an experimental framework for interactive retrieval that allows system components to be easily exchanged or extended. Moreover, this framework also contains tools for organizing lab experiments: besides extensive logging (including the possibility of exploiting eye-tracking data), the system allows for presenting questionnaires at all stages of a search session (pre-/post-task and pre-/post-session), as well as scheduling search tasks and monitoring task time. While the search frontend can be deployed decentrally at the participating sites, all data is collected in a centralized database, from which it can either be exported in XML format or analyzed directly via appropriate evaluation routines.
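To make the logging-and-export architecture concrete, the following is a minimal sketch of how session events collected in a central store might be serialized to XML for later analysis. The event fields, element names, and function signatures here are illustrative assumptions for this sketch, not ezDL's actual log schema or API.

```python
from dataclasses import dataclass
from xml.etree import ElementTree as ET

@dataclass
class LogEvent:
    """One logged user interaction (hypothetical fields, not ezDL's schema)."""
    timestamp: str
    user: str
    action: str   # e.g. "query", "view_result", "save_item"
    detail: str

def session_to_xml(session_id: str, events: list[LogEvent]) -> str:
    """Serialize a search session's event log to an XML string."""
    root = ET.Element("session", id=session_id)
    for e in events:
        ev = ET.SubElement(root, "event",
                           timestamp=e.timestamp, user=e.user, action=e.action)
        ev.text = e.detail
    return ET.tostring(root, encoding="unicode")

# Example: two events from one session, exported for offline evaluation.
events = [
    LogEvent("2012-05-20T10:00:00", "u1", "query", "interactive retrieval"),
    LogEvent("2012-05-20T10:00:05", "u1", "view_result", "doc-42"),
]
xml = session_to_xml("s1", events)
print(xml)
```

A centralized export of this kind is what lets decentrally collected interaction data be analyzed with a single set of evaluation routines.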