The Complexity of Translating BLPs to RMMs

  • Authors:
  • Stephen Muggleton; Niels Pahlavi

  • Affiliations:
  • Department of Computing, Imperial College London, 180 Queen's Gate, London SW7 2BZ, UK

  • Venue:
  • Inductive Logic Programming
  • Year:
  • 2007

Abstract

Probabilistic Logic Learning (PLL) is concerned with learning probabilistic logical models from data. The underlying frameworks combine expressive knowledge representation formalisms with reasoning mechanisms grounded in probability theory. Numerous such frameworks have been proposed, so there is a real need to compare them as a step towards their unification. This paper compares Relational Markov Models (RMMs) and Bayesian Logic Programs (BLPs). We relate the semantics of BLPs and RMMs, arguing that RMMs encode the same knowledge as a sub-class of BLPs. We fully describe a translation from this sub-class of BLPs into RMMs and provide complexity results demonstrating an exponential expansion in formula size, which shows that RMMs are less compact than the equivalent BLPs with respect to this translation. The authors are unaware of any more compact translation between BLPs and RMMs. A full implementation has been realized, consisting of meta-interpreters for both BLPs and RMMs together with a translation engine, and the equality of the probability distributions defined by BLPs and their corresponding RMMs has been verified on practical examples.
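
The intuition behind the exponential expansion can be illustrated independently of the paper's exact construction. The following Python sketch is a toy example (the dependency structure and variable names are invented, not taken from the paper): it contrasts the size of a factored, CPT-based representation, of the kind a BLP's Bayesian-network semantics induces, with a flat representation that enumerates joint states of the same Boolean variables, as a ground Markov-style model would. The factored size grows with the number of parents per node, while the flat size grows exponentially in the total number of variables.

```python
# Illustrative sketch only: compares a factored (CPT-based) parameterisation
# with a flat enumeration of joint states. Structure and names are invented.
from itertools import product

# Toy dependency structure: each variable maps to its list of parents.
parents = {
    "burglary": [],
    "earthquake": [],
    "alarm": ["burglary", "earthquake"],
    "johncalls": ["alarm"],
    "marycalls": ["alarm"],
}

# Factored size: one CPT row per joint assignment of each node's parents.
factored_size = sum(2 ** len(ps) for ps in parents.values())

# Flat size: one state per joint assignment of all Boolean variables.
flat_size = len(list(product([False, True], repeat=len(parents))))

print(f"factored (BLP-style) parameters: {factored_size}")  # 1+1+4+2+2 = 10
print(f"flat (state-enumeration) states: {flat_size}")      # 2**5 = 32
```

Adding a variable to this toy network adds only a few CPT rows to the factored representation, but doubles the number of joint states, which mirrors the compactness gap the abstract attributes to the BLP-to-RMM translation.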