Learning, logic, and probability: a unified view

  • Authors: Pedro Domingos
  • Affiliations: Department of Computer Science and Engineering, University of Washington, Seattle, WA

  • Venue: IBERAMIA-SBIA'06: Proceedings of the 2nd International Joint Conference (10th Ibero-American Conference on AI and 18th Brazilian AI Symposium), Advances in Artificial Intelligence
  • Year: 2006

Abstract

AI systems must be able to learn, reason logically, and handle uncertainty. While much research has focused on each of these goals individually, only recently have we begun to attempt to achieve all three at once. In this talk, I describe Markov logic, a representation that combines first-order logic and probabilistic graphical models, along with algorithms for learning and inference in it. Syntactically, Markov logic is first-order logic augmented with a weight for each formula. Semantically, a set of Markov logic formulas represents a probability distribution over possible worlds, in the form of a Markov network with one feature per grounding of each formula in the set, carrying the corresponding weight. Formulas are learned from relational databases using inductive logic programming techniques. Weights can be learned either generatively (using pseudo-likelihood optimization) or discriminatively (using a voted perceptron algorithm). Inference is performed by a weighted satisfiability solver or by Markov chain Monte Carlo, operating on the minimal subset of the ground network required to answer the query. Experiments in link prediction, entity resolution, and other problems illustrate the promise of this approach.
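The semantics sketched above corresponds to the standard log-linear model over possible worlds from the Markov logic literature (Richardson and Domingos); the abstract does not spell out the formula, but for reference it takes this form, where w_i is the weight of formula F_i and n_i(x) counts its true groundings in world x:

```latex
% Probability of a possible world x under a Markov logic network:
% one feature per grounding of each formula, Z the partition function.
P(X = x) = \frac{1}{Z} \exp\!\left( \sum_i w_i \, n_i(x) \right),
\qquad
Z = \sum_{x'} \exp\!\left( \sum_i w_i \, n_i(x') \right)
```

A minimal sketch of that scoring rule in Python, assuming the ground formulas have already been enumerated; the names (`World`, `GroundFormula`, `log_score`) are hypothetical illustrations, not from the paper:

```python
import math
from typing import Callable, Dict, List, Tuple

# A world assigns a truth value to each ground atom; a ground formula
# pairs the weight of its first-order source with a boolean test.
World = Dict[str, bool]
GroundFormula = Tuple[float, Callable[[World], bool]]

def log_score(world: World, ground_formulas: List[GroundFormula]) -> float:
    """Unnormalized log-probability: sum the weights of satisfied groundings."""
    return sum(w for w, holds in ground_formulas if holds(world))

# Example: one grounding of Smokes(A) ^ Friends(A,B) => Smokes(B), weight 1.5.
gfs: List[GroundFormula] = [
    (1.5, lambda x: not (x["SmokesA"] and x["FriendsAB"]) or x["SmokesB"]),
]
world = {"SmokesA": True, "FriendsAB": True, "SmokesB": False}
print(math.exp(log_score(world, gfs)))  # proportional to P(world), up to 1/Z
```

In the limit of infinite weights a formula becomes a hard constraint, which is how Markov logic subsumes first-order logic; a finite weight softens the formula into a feature that makes violating worlds less probable rather than impossible.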