Joint inference of entities, relations, and coreference

  • Authors:
  • Sameer Singh; Sebastian Riedel; Brian Martin; Jiaping Zheng; Andrew McCallum

  • Affiliations:
  • University of Massachusetts, Amherst, MA, USA (Singh, Martin, Zheng, McCallum); University College London, London, United Kingdom (Riedel)

  • Venue:
  • Proceedings of the 2013 Workshop on Automated Knowledge Base Construction (AKBC)
  • Year:
  • 2013

Abstract

Although joint inference is an effective approach to avoiding cascading errors when predicting multiple natural language tasks, its application to information extraction has been limited to modeling only two tasks at a time, leading to modest improvements. In this paper, we focus on three crucial tasks in automated extraction pipelines: entity tagging, relation extraction, and coreference. We propose a single, joint graphical model that represents the various dependencies between the tasks, allowing uncertainty to flow across task boundaries. Since the resulting model has high tree width and contains a large number of variables, we present a novel extension to belief propagation that sparsifies the domains of variables during inference. Experimental results show that our joint model consistently improves results on all three tasks as we represent more dependencies. In particular, our joint model obtains a 12% error reduction on tagging over the isolated models.
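
The domain-sparsifying belief propagation sketched in the abstract can be illustrated with a small, self-contained example. The sketch below is not the authors' implementation: the toy factor graph, the pruning threshold, and all function and variable names are hypothetical. It only shows the core idea that, after each round of message passing, values whose approximate marginal belief falls below a threshold are removed from a variable's domain, so later rounds sum over fewer joint assignments.

import itertools


def normalize(scores):
    total = sum(scores.values())
    return {v: s / total for v, s in scores.items()} if total > 0 else scores


def run_sparse_bp(domains, factors, rounds=5, threshold=0.25):
    """Loopy BP on a factor graph, pruning low-belief values each round.

    domains: {variable: set of candidate values}
    factors: list of (scope, potential), where scope is a list of variable
             names and potential maps a {variable: value} assignment to a
             non-negative score.
    """
    # Messages between factors and variables, initialised uniformly.
    f2v = {(i, v): {val: 1.0 for val in domains[v]}
           for i, (scope, _) in enumerate(factors) for v in scope}
    v2f = {(v, i): {val: 1.0 for val in domains[v]}
           for i, (scope, _) in enumerate(factors) for v in scope}

    for _ in range(rounds):
        # Variable -> factor: product of messages from the other factors.
        for (v, i) in v2f:
            msg = {}
            for val in domains[v]:
                prod = 1.0
                for j, (scope, _) in enumerate(factors):
                    if j != i and v in scope:
                        prod *= f2v[(j, v)].get(val, 0.0)
                msg[val] = prod
            v2f[(v, i)] = normalize(msg)

        # Factor -> variable: sum over assignments to the other variables,
        # restricted to their current (possibly pruned) domains.
        for (i, v) in f2v:
            scope, potential = factors[i]
            others = [u for u in scope if u != v]
            msg = {}
            for val in domains[v]:
                total = 0.0
                for combo in itertools.product(*(sorted(domains[u]) for u in others)):
                    assignment = dict(zip(others, combo))
                    assignment[v] = val
                    weight = potential(assignment)
                    for u in others:
                        weight *= v2f[(u, i)].get(assignment[u], 0.0)
                    total += weight
                msg[val] = total
            f2v[(i, v)] = normalize(msg)

        # Sparsification step: drop values whose belief falls below threshold.
        for v in domains:
            belief = {}
            for val in domains[v]:
                prod = 1.0
                for i, (scope, _) in enumerate(factors):
                    if v in scope:
                        prod *= f2v[(i, v)].get(val, 0.0)
                belief[val] = prod
            belief = normalize(belief)
            kept = {val for val, p in belief.items() if p >= threshold}
            if kept:  # never empty a domain completely
                domains[v] = kept
    return domains


# Hypothetical toy example: an entity-tag variable and a relation variable
# whose labels should be compatible (an employer relation prefers an ORG tag).
domains = {"tag": {"PER", "ORG", "LOC"}, "rel": {"employer", "born-in", "none"}}

def compatibility(a):
    table = {("ORG", "employer"): 6.0, ("LOC", "born-in"): 2.0}
    return table.get((a["tag"], a["rel"]), 1.0)

print(run_sparse_bp(domains, [(["tag", "rel"], compatibility)]))

On this toy problem, the pruning step removes the low-belief values (PER for the tag, none for the relation) after the first round, so the remaining rounds only consider the mutually compatible label pairs; the paper's actual model applies the same principle to entity tags, relation labels, and coreference decisions jointly.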