Markov logic

  • Authors:
  • Pedro Domingos; Stanley Kok; Daniel Lowd; Hoifung Poon; Matthew Richardson; Parag Singla

  • Affiliations:
  • Department of Computer Science and Engineering, University of Washington, Seattle, WA (Domingos, Kok, Lowd, Poon, Singla); Microsoft Research, Redmond, WA (Richardson)

  • Venue:
  • Probabilistic inductive logic programming
  • Year:
  • 2008

Abstract

Most real-world machine learning problems have both statistical and relational aspects. Learners therefore need representations that combine probability with relational logic. Markov logic accomplishes this by attaching weights to first-order formulas and viewing them as templates for features of Markov networks. Inference algorithms for Markov logic draw on ideas from satisfiability, Markov chain Monte Carlo and knowledge-based model construction. Learning algorithms are based on the conjugate gradient algorithm, pseudo-likelihood and inductive logic programming. Markov logic has been successfully applied to problems in entity resolution, link prediction, information extraction and other areas, and is the basis of the open-source Alchemy system.
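For reference, the representation the abstract describes can be summarized by the standard Markov logic network (MLN) joint distribution, in which each first-order formula F_i with weight w_i contributes a feature counting its true groundings in a possible world x. The sketch below uses the usual notation from the MLN literature (n_i(x) for the number of true groundings of F_i in x, Z for the partition function); it is an illustrative restatement, not text from this record.

  P(X = x) \;=\; \frac{1}{Z} \exp\!\Big( \sum_i w_i \, n_i(x) \Big),
  \qquad
  Z \;=\; \sum_{x'} \exp\!\Big( \sum_i w_i \, n_i(x') \Big)

Intuitively, worlds that violate a weighted formula are not impossible, only less probable: each unsatisfied grounding removes w_i from the exponent, and a hard constraint corresponds to an infinite weight.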