Contextual Language Models For Ranking Answers To Natural Language Definition Questions

  • Authors:
  • Alejandro Figueroa; John Atkinson

  • Affiliations:
  • 1 Yahoo! Research Latin America, Santiago, Chile; 2 Department of Computer Sciences, Universidad de Concepción, Concepción, Chile

  • Venue:
  • Computational Intelligence
  • Year:
  • 2012

Abstract

Question-answering systems make good use of knowledge bases (KBs, e.g., Wikipedia) when responding to definition queries. Typically, a system extracts facts relevant to the question from articles across KBs and then projects these facts onto the candidate answers. However, studies have shown that the performance of this kind of method drops sharply whenever the KBs provide only narrow coverage. This work describes a new approach to this problem that constructs context models for scoring candidate answers; more precisely, these are statistical n-gram language models inferred from lexicalized dependency paths extracted from Wikipedia abstracts. Unlike state-of-the-art approaches, context models are built to capture the semantics of candidate answers (e.g., “novel,” “singer,” “coach,” and “city”). The work is further extended by investigating the impact on context models of additional linguistic knowledge such as part-of-speech tagging and named-entity recognition. Results showed the effectiveness of context models based on n-gram language models over lexicalized dependency paths, and of context indicators as promising cues for the presence of definitions in natural language texts. © 2012 Wiley Periodicals, Inc.
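As a rough illustration of the idea described in the abstract, the sketch below builds one small bigram language model per context indicator (e.g., “singer,” “city”) and ranks candidate answers by their log-probability under the matching model. The class name, the toy training sequences, and the add-one smoothing are assumptions made for illustration only; the paper's actual models are inferred from lexicalized dependency paths mined from Wikipedia abstracts, not from the stand-in token lists shown here.

```python
import math
from collections import defaultdict


class ContextNgramModel:
    """Bigram language model with add-one smoothing over path tokens.

    Illustrative only: each training "path" here is a plain token list
    standing in for a lexicalized dependency path.
    """

    def __init__(self):
        self.bigram_counts = defaultdict(int)
        self.unigram_counts = defaultdict(int)
        self.vocab = set()

    def train(self, paths):
        for tokens in paths:
            padded = ["<s>"] + tokens + ["</s>"]
            for prev, cur in zip(padded, padded[1:]):
                self.bigram_counts[(prev, cur)] += 1
                self.unigram_counts[prev] += 1
                self.vocab.update((prev, cur))

    def log_prob(self, tokens):
        padded = ["<s>"] + tokens + ["</s>"]
        v = len(self.vocab) or 1
        score = 0.0
        for prev, cur in zip(padded, padded[1:]):
            num = self.bigram_counts[(prev, cur)] + 1  # add-one smoothing
            den = self.unigram_counts[prev] + v
            score += math.log(num / den)
        return score


# One model per context indicator; training data is a hypothetical stand-in.
context_models = {"singer": ContextNgramModel(), "city": ContextNgramModel()}
context_models["singer"].train([["is", "a", "singer", "from"],
                                ["released", "the", "album"]])
context_models["city"].train([["is", "a", "city", "in"],
                              ["has", "a", "population", "of"]])

# Rank candidate answers to a "singer" definition question by model score.
candidates = [["released", "the", "album"], ["has", "a", "population", "of"]]
ranked = sorted(candidates, key=context_models["singer"].log_prob, reverse=True)
print(ranked[0])  # the candidate most compatible with the "singer" context
```

In this toy setup the candidate that resembles the “singer” training paths scores higher than the one resembling the “city” paths, which is the ranking behavior the context models are meant to provide.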