Questionnaires for eliciting evaluation data from users of interactive question answering systems

  • Authors:
  • D. Kelly; P. B. Kantor; E. L. Morse; J. Scholtz; Y. Sun

  • Affiliations:
  • University of North Carolina, Chapel Hill, NC 27599-3360, USA. E-mail: dianek@email.unc.edu
  • Rutgers University, New Brunswick, NJ 08901, USA. E-mail: kantor@scils.rutgers.edu
  • National Institute of Standards & Technology, Gaithersburg, MD 20899, USA. E-mail: emile.morse@nist.gov
  • Pacific Northwest National Laboratory, Richland, WA 99352, USA. E-mail: jean.scholtz@pnl.gov
  • University at Buffalo, The State University of New York, Buffalo, NY 14260, USA. E-mail: sun3@buffalo.edu

  • Venue:
  • Natural Language Engineering
  • Year:
  • 2009

Abstract

Evaluating interactive question answering (QA) systems with real users is challenging because traditional evaluation measures based on the relevance of returned items are difficult to apply: relevance judgments can be unstable in multi-user evaluations. The work reported in this paper evaluates the effectiveness of three questionnaires at distinguishing among a set of interactive QA systems: a Cognitive Workload Questionnaire (NASA TLX) and Task and System Questionnaires customized to a specific interactive QA application. The questionnaires were evaluated with four systems, seven analysts, and eight scenarios during a two-week workshop. Overall, the results demonstrate that all three questionnaires are effective at distinguishing among systems, with the Task Questionnaire being the most sensitive. The results also provide initial support for the validity and reliability of the questionnaires.
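
As a rough illustration of what "distinguishing among systems" with questionnaire data can look like, the sketch below compares hypothetical Likert-scale Task Questionnaire scores across four systems using a Kruskal-Wallis test. The scores, system names, and choice of test are assumptions made for illustration only; they are not the paper's reported data or analysis procedure.

```python
# Minimal sketch (assumptions only): testing whether questionnaire scores
# differ across systems, as one way of probing a questionnaire's sensitivity.
from scipy.stats import kruskal

# Hypothetical 7-point Likert responses grouped by system; in the paper's
# setting these would come from analysts completing the Task Questionnaire
# after each scenario.
task_scores = {
    "SystemA": [5, 6, 5, 7, 6, 5, 6],
    "SystemB": [3, 4, 3, 2, 4, 3, 3],
    "SystemC": [4, 5, 4, 4, 5, 4, 5],
    "SystemD": [2, 3, 2, 3, 2, 3, 2],
}

# Kruskal-Wallis tests whether the score distributions differ by system.
statistic, p_value = kruskal(*task_scores.values())
print(f"Kruskal-Wallis H = {statistic:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Scores differ across systems: the questionnaire distinguishes them.")
else:
    print("No significant difference detected across systems.")
```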