Exploiting causal independence in Markov logic networks: combining undirected and directed models

  • Authors:
  • Sriraam Natarajan;Tushar Khot;Daniel Lowd;Prasad Tadepalli;Kristian Kersting;Jude Shavlik

  • Affiliations:
  • University of Wisconsin-Madison;University of Wisconsin-Madison;University of Oregon;Oregon State University;Fraunhofer IAIS;University of Wisconsin-Madison

  • Venue:
  • ECML PKDD'10 Proceedings of the 2010 European conference on Machine learning and knowledge discovery in databases: Part II
  • Year:
  • 2010


Abstract

A new method is proposed for compiling causal independencies into Markov logic networks (MLNs). An MLN can be viewed as compactly representing a factorization of a joint probability distribution into a product of factors guided by logical formulas. We present a notion of causal independence that enables the factors to be further factorized into combinations of even smaller factors, yielding a finer-grain factorization of the joint probability. Causal independence lets us specify a factor in terms of weighted, directed clauses and combining operators, such as "or", "sum", or "max", applied to the contributions of the variables involved in the factor, thereby combining undirected and directed knowledge. Our experimental evaluation shows that exploiting the finer-grain factorization provided by causal independence can improve the quality of parameter learning in MLNs.
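To make the idea of a combining operator concrete, the following is a minimal sketch (not the authors' code; the weights and the sigmoid parameterization are illustrative assumptions) of the classic "or" case: a noisy-or decomposition in which each cause contributes an independent small factor, and the target is true unless every active cause's contribution fails. This replaces one factor over all n+1 variables with n two-variable pieces.

```python
import math

def sigmoid(w):
    """Map an illustrative clause weight to a per-cause success probability."""
    return 1.0 / (1.0 + math.exp(-w))

def noisy_or(weights, x):
    """P(y=1 | x) under causal independence with an "or" combining operator.

    Each active cause x_i independently 'fires' with probability
    sigmoid(weights[i]); y is true unless all active causes fail.
    The product over causes is the finer-grain factorization: one small
    factor per (cause, target) pair instead of one factor over all causes.
    """
    p_all_fail = 1.0
    for w_i, x_i in zip(weights, x):
        if x_i:
            p_all_fail *= 1.0 - sigmoid(w_i)
    return 1.0 - p_all_fail

# With no active causes, the "or" of zero contributions is false.
print(noisy_or([2.0, 1.0], [0, 0]))  # 0.0
# Adding an active cause can only raise P(y=1) (monotone in the causes).
print(noisy_or([2.0, 1.0], [1, 0]) < noisy_or([2.0, 1.0], [1, 1]))  # True
```

A "max" or "sum" operator would replace the product-of-failures line with the corresponding aggregation over per-cause contributions; the point in each case is that the large factor decomposes into per-cause pieces.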