Approximate inference, structure learning and feature estimation in Markov random fields

  • Authors:
  • Pradeep Ravikumar; John Lafferty

  • Affiliations:
  • Carnegie Mellon University; Carnegie Mellon University

  • Venue:
  • Approximate inference, structure learning and feature estimation in Markov random fields
  • Year:
  • 2007

Abstract

Markov random fields (MRFs), or undirected graphical models, are graphical representations of probability distributions. Each graph represents a family of distributions: the nodes of the graph represent random variables, the edges encode independence assumptions, and weights over the edges and cliques specify a particular member of the family. There are three main classes of tasks within this framework: the first is to perform inference given the graph structure, parameters, and (clique) feature functions; the second is to estimate the graph structure and parameters from data, given the feature functions; the third is to estimate the feature functions themselves from data. Key inference subtasks include estimating the normalization constant (also called the partition function), estimating event probabilities, computing rigorous upper and lower bounds (interval guarantees), performing inference given only moment constraints, and computing the most probable configuration. The thesis addresses all of the above tasks and subtasks.
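
As a point of reference for the terminology in the abstract, a standard exponential-family parameterization of an MRF over the cliques of a graph can be sketched as follows (the notation here is generic and not taken from the thesis itself):

  p_\theta(x) \;=\; \frac{1}{Z(\theta)} \exp\!\Big( \sum_{c \in \mathcal{C}} \theta_c \, \phi_c(x_c) \Big),
  \qquad
  Z(\theta) \;=\; \sum_{x} \exp\!\Big( \sum_{c \in \mathcal{C}} \theta_c \, \phi_c(x_c) \Big),

where \mathcal{C} is the set of cliques, the \phi_c are the (clique) feature functions, the \theta_c are the weights (parameters), and Z(\theta) is the partition function (normalization constant). In this notation, the inference subtasks listed above amount to computing or bounding Z(\theta), computing event probabilities under p_\theta, and finding the most probable configuration \arg\max_x p_\theta(x).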