Capturing a taxonomy of failures during automatic interpretation of questions posed in natural language

  • Authors:
  • Peter Z. Yeh; Shaw-Yi Chaw; James J. Fan; Dan G. Tecuci

  • Affiliations:
  • Accenture Technology Labs, Palo Alto, CA; University of Texas at Austin, Austin, TX; IBM Research, Hawthorne, NY; University of Texas at Austin, Austin, TX

  • Venue:
  • Proceedings of the 4th international conference on Knowledge capture
  • Year:
  • 2007

Abstract

An important problem in artificial intelligence is capturing, from natural language, formal representations that can be used by a reasoner to compute an answer. Many researchers have studied this problem by developing algorithms addressing specific phenomena in natural language interpretation, but few have studied (or cataloged) the types of failures associated with this problem. Knowledge of these failures can help researchers by providing a road map of open research problems and help practitioners by providing a checklist of issues to address in order to build systems that can achieve good performance on this problem. In this paper, we present a study -- conducted in the context of the Halo Project -- cataloging the types of failures that occur when capturing knowledge from natural language. We identified the categories of failures by examining a corpus of questions posed by naive users to a knowledge-based question answering system and empirically demonstrated the generality of our categorizations. We also describe available technologies that can address some of the failures we have identified.