Using cross-evaluation to evaluate interactive QA systems

  • Authors:
  • Ying Sun; Paul B. Kantor; Emile L. Morse

  • Affiliations:
  • 548 Baldy Hall, University at Buffalo, Buffalo, New York 14260; 4 Huntington St., Rutgers University, New Brunswick, New Jersey 08901; 100 Bureau Drive, Stop 8940, National Institute of Standards & Technology, Gaithersburg, Maryland 20899

  • Venue:
  • Journal of the American Society for Information Science and Technology
  • Year:
  • 2011

Abstract

In this article, we report on an experiment to assess the possibility of rigorous evaluation of interactive question-answering (QA) systems using the cross-evaluation method. This method takes into account the effects of tasks, context, and the users of the systems. Statistical techniques are used to remove these effects, isolating the effect of the system itself. The results show that this approach yields meaningful measurements of the impact of systems on user task performance, using a surprisingly small number of subjects and without relying on predetermined judgments of the quality or relevance of materials. We conclude that the method is indeed effective for comparing end-to-end QA systems, and that it compares interactive systems with high efficiency. © 2011 Wiley Periodicals, Inc.
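
The abstract does not spell out which statistical techniques are used, but a common way to separate a system's effect from user and task effects is an ANOVA-style fixed-effects linear model. The sketch below, in Python with statsmodels, is a hedged illustration only: the synthetic data, column names, and factor structure are assumptions for demonstration, not the article's actual cross-evaluation procedure.

```python
# Illustrative sketch: remove user and task effects with a fixed-effects
# linear model so the remaining variation can be attributed to the system.
# The data here are synthetic and the column names are assumptions; the
# article's cross-evaluation method differs in its details.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# One performance score per (user, task, system) combination (synthetic).
users, tasks, systems = range(8), range(4), ["A", "B", "C"]
rows = []
for u in users:
    for t in tasks:
        for s in systems:
            score = (
                5.0
                + 0.5 * u                               # user effect
                - 0.3 * t                               # task effect
                + {"A": 0.0, "B": 0.4, "C": -0.2}[s]    # system effect
                + rng.normal(scale=0.5)                 # noise
            )
            rows.append({"user": u, "task": t, "system": s, "score": score})
df = pd.DataFrame(rows)

# Fit score ~ system + user + task; C(...) treats each factor as categorical.
model = smf.ols("score ~ C(system) + C(user) + C(task)", data=df).fit()

# Type-II ANOVA table: the C(system) row shows the system's contribution
# after user and task effects have been accounted for.
print(sm.stats.anova_lm(model, typ=2))
```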