Natural logic for textual inference

  • Authors: Bill MacCartney; Christopher D. Manning
  • Affiliations: Stanford University; Stanford University
  • Venue: RTE '07 Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing
  • Year: 2007

Abstract

This paper presents the first use of a computational model of natural logic---a system of logical inference which operates over natural language---for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.
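The abstract describes a pipeline that classifies an entailment relation for each atomic edit and then composes those relations into a top-level judgment. The sketch below illustrates the general idea of composing entailment relations; the relation names and the join rules here are simplified assumptions for illustration, not the paper's actual relation inventory or composition table.

```python
# Hypothetical sketch of composing per-edit entailment relations into a
# top-level judgment, in the spirit of natural logic. The four relations
# and the join rules below are simplified assumptions, not the paper's
# exact formulation.

EQUIVALENT = "equivalent"      # premise and hypothesis mean the same
FORWARD = "forward"            # premise entails hypothesis
REVERSE = "reverse"            # hypothesis entails premise
INDEPENDENT = "independent"    # no entailment either way

def join(r1, r2):
    """Compose two entailment relations (assumed, simplified rules)."""
    if r1 == EQUIVALENT:
        return r2              # equivalence acts as the identity
    if r2 == EQUIVALENT:
        return r1
    if r1 == r2:
        return r1              # forward . forward = forward, etc.
    return INDEPENDENT         # mixed directions: entailment is lost

def top_level_entailment(atomic_relations):
    """Fold the per-edit relations into one overall judgment."""
    result = EQUIVALENT        # the empty edit sequence changes nothing
    for r in atomic_relations:
        result = join(result, r)
    return result
```

For example, a sequence of edits each judged `forward` (or `equivalent`) yields a `forward` top-level judgment, while a single edit in the opposite direction collapses the composition to `independent`, mirroring how one non-entailing atomic edit can block an overall entailment.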