Default reasoning in natural language processing

  • Authors:
  • Uri Zernik; Allen Brown

  • Affiliations:
  • Artificial Intelligence Program, GE Research and Development Center; Systems Sciences Laboratory, Xerox Webster Research Center

  • Venue:
  • COLING '88: Proceedings of the 12th Conference on Computational Linguistics, Volume 2
  • Year:
  • 1988

Abstract

In natural language, as in other computational task domains, it is important to operate under default assumptions. First, many of the constraints required for constraint propagation are initially unspecified. Second, in highly ambiguous tasks such as text analysis, ambiguity can be reduced by considering more plausible scenarios first. Default reasoning is problematic for first-order logic, since it requires non-monotonic inferences: whereas in monotonic logic facts can only be asserted, in non-monotonic logic the system must be kept consistent even as previously assumed defaults are retracted.

Non-monotonicity is pervasive in natural language due to the serial nature of utterances. When text is read left to right, default assumptions made early in a sentence must often be withdrawn as reading proceeds. Truth maintenance, which accounts for non-monotonic inferences, can resolve this issue and address important linguistic phenomena. In this paper we describe how, in NMG (Non-Monotonic Grammar), a truth maintenance system that monitors a logic parser can significantly enhance the parser's capabilities.
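
To make the behaviour described in the abstract concrete, the sketch below shows a default interpretation asserted early in a left-to-right pass over a garden-path sentence and then retracted, in the spirit of truth maintenance, when a later word contradicts it. This is a minimal toy in Python, not the paper's NMG implementation; the ToyTMS class, the sentence, and the word categories are illustrative assumptions only.

```python
# Toy sketch (assumed example, not the paper's NMG system): a default reading
# is asserted early in a left-to-right parse and retracted, together with the
# conclusions that depended on it, when later input contradicts it.

class ToyTMS:
    """Minimal truth-maintenance bookkeeping: beliefs plus their dependents."""
    def __init__(self):
        self.beliefs = {}      # belief -> True if it is a default assumption
        self.dependents = {}   # belief -> beliefs justified by it

    def assume(self, belief, default=True):
        self.beliefs[belief] = default
        self.dependents.setdefault(belief, [])

    def justify(self, belief, because):
        self.assume(belief, default=False)
        self.dependents.setdefault(because, []).append(belief)

    def retract(self, belief):
        # Remove the belief and, recursively, everything justified by it.
        for dep in self.dependents.pop(belief, []):
            self.retract(dep)
        self.beliefs.pop(belief, None)


def parse(words):
    tms = ToyTMS()
    for i, word in enumerate(words):
        if word == "man" and i > 0:
            # Default: read "man" as a noun heading the subject NP.
            tms.assume("man/noun")
            tms.justify("subject = 'the old man'", because="man/noun")
        if word == "the" and "man/noun" in tms.beliefs and i > 2:
            # A second determiner after "man" contradicts the noun reading:
            # withdraw the default and its dependents, re-analyse "man" as a verb.
            tms.retract("man/noun")
            tms.assume("man/verb", default=False)
            tms.assume("subject = 'the old'", default=False)
        print(f"after '{word}':", sorted(tms.beliefs))
    return tms


parse("the old man the boats".split())
```

Running the toy prints the belief set after each word, showing the noun reading of "man" appear and then disappear once the second "the" forces the verb analysis, which is the kind of retraction a truth maintenance system tracks for the parser.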