Robust textual inference via learning and abductive reasoning

  • Authors:
  • Rajat Raina; Andrew Y. Ng; Christopher D. Manning

  • Affiliations:
  • Computer Science Department, Stanford University, Stanford, CA (all authors)

  • Venue:
  • AAAI'05: Proceedings of the 20th National Conference on Artificial Intelligence - Volume 3
  • Year:
  • 2005


Abstract

We present a system for textual inference (the task of inferring whether one sentence follows from another text) that uses learning and a logical-formula semantic representation of the text. More precisely, our system begins by parsing sentences and then transforming them into a logical formula-like representation similar to the one used by Harabagiu et al. (2000). An abductive theorem prover then tries to find the minimum "cost" set of assumptions necessary to show that one statement follows from the other. These costs reflect how likely different assumptions are, and are learned automatically using information from syntactic/semantic features and from linguistic resources such as WordNet. If one sentence follows from the other given only highly plausible, low-cost assumptions, then we conclude that it can be inferred. Our approach can be viewed as combining statistical machine learning and classical logical reasoning, in the hope of marrying the robustness and scalability of learning with the precision and elegance of logical theorem proving. We give experimental results from the recent PASCAL Recognizing Textual Entailment (RTE) 2005 challenge, where a system using this inference algorithm achieved the highest confidence-weighted score.
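The abstract's core idea, scoring an entailment by the minimum cost of the assumptions needed to prove it, can be illustrated with a toy sketch. Everything below is illustrative and not from the paper: a bag-of-predicates stand-in for the paper's logical formulas, made-up features and hand-set weights (the actual system learns the weights and draws features from syntax and WordNet), and a greedy per-term search rather than a full abductive theorem prover.

```python
# Toy sketch of abductive, cost-based entailment scoring.
# Hypothetical simplification: sentences are bags of predicate terms,
# and "proving" the hypothesis means grounding each of its terms in
# the text via the cheapest available assumption.

import math


def features(h_term: str, t_term: str) -> dict:
    """Features for the assumption 'h_term unifies with t_term'.
    A real system would add WordNet relations, POS agreement, etc."""
    return {
        "exact_match": 1.0 if h_term == t_term else 0.0,
        "shared_prefix": 1.0 if t_term.startswith(h_term[:4]) else 0.0,
    }


def assumption_cost(h_term: str, t_term: str, weights: dict) -> float:
    """Map a weighted feature score to a nonnegative cost:
    the more plausible the assumption, the closer the cost is to 0."""
    score = sum(weights.get(f, 0.0) * v
                for f, v in features(h_term, t_term).items())
    return math.exp(-score)


def entailment_cost(text_terms, hyp_terms, weights) -> float:
    """Greedy abduction: for each hypothesis term, pay the cheapest
    assumption that grounds it in the text, or a fixed penalty to
    assume it outright when no text term supports it."""
    ASSUME_OUTRIGHT = 5.0  # illustrative penalty for unsupported terms
    total = 0.0
    for h in hyp_terms:
        best = min((assumption_cost(h, t, weights) for t in text_terms),
                   default=ASSUME_OUTRIGHT)
        total += min(best, ASSUME_OUTRIGHT)
    return total


if __name__ == "__main__":
    weights = {"exact_match": 4.0, "shared_prefix": 1.5}  # would be learned
    text = ["buy", "Microsoft", "company"]
    hypothesis = ["purchase", "Microsoft"]
    cost = entailment_cost(text, hypothesis, weights)
    # Threshold is illustrative; the paper tunes the decision on data.
    print(f"cost = {cost:.3f} ->",
          "entailed" if cost < 6.0 else "not entailed")
```

In this sketch, "Microsoft" grounds cheaply via an exact match, while "purchase" incurs a moderate cost because no feature links it to "buy"; a learned model with WordNet-derived features would make that substitution cheap, which is exactly the kind of plausibility knowledge the paper's learned costs are meant to capture.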