Learning and inference in weighted logic with application to natural language processing

  • Authors:
  • Andrew McCallum; Aron Culotta

  • Affiliations:
  • University of Massachusetts Amherst; University of Massachusetts Amherst

  • Venue:
  • Doctoral thesis, University of Massachusetts Amherst
  • Year:
  • 2008


Abstract

Over the past two decades, statistical machine learning approaches to natural language processing have largely replaced earlier logic-based systems. These probabilistic methods have proven well-suited to the ambiguity inherent in human communication. However, the shift to statistical modeling has largely sacrificed the representational advantages of logic-based approaches. For example, many language processing problems are more naturally expressed in first-order logic than in propositional logic. Unfortunately, most machine learning algorithms have been developed for propositional knowledge representations. In recent years, there have been a number of attempts to combine logical and probabilistic approaches to artificial intelligence. However, their impact on real-world applications has been limited by serious scalability issues that arise when algorithms designed for propositional representations are applied to first-order representations. In this thesis, we explore approximate learning and inference algorithms that are tailored to these richer first-order representations, and we demonstrate that this synthesis of probability and logic can significantly improve the accuracy of several language processing systems.
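To make the scalability issue concrete, the sketch below (not taken from the thesis; the rule, its weight, and the predicate name Coref are illustrative assumptions) grounds a single weighted first-order rule, transitivity of coreference, over a small set of mentions. One compact first-order formula expands into O(n^3) propositional factors, which is exactly the blow-up that algorithms designed for propositional representations must contend with.

```python
from itertools import permutations

# Illustrative sketch, not the thesis's implementation: one weighted
# first-order rule (transitivity of coreference),
#   w : Coref(x, y) AND Coref(y, z) => Coref(x, z)
# grounded over a set of noun-phrase mentions.

WEIGHT = 1.5  # hypothetical rule weight


def ground_transitivity(mentions):
    """Enumerate all propositional groundings of the transitivity rule."""
    factors = []
    for x, y, z in permutations(mentions, 3):
        # Each grounding becomes one weighted propositional factor over
        # three ground atoms.
        factors.append((WEIGHT, ("Coref", x, y), ("Coref", y, z), ("Coref", x, z)))
    return factors


if __name__ == "__main__":
    mentions = [f"m{i}" for i in range(20)]      # 20 mentions in a document
    factors = ground_transitivity(mentions)
    print(len(factors))                           # 20 * 19 * 18 = 6840 ground factors
```

Even this toy setting yields thousands of ground factors from a single rule over twenty mentions; realistic documents and rule sets make exact propositional inference intractable, which motivates the approximate, first-order-aware algorithms studied in the thesis.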