Probabilistic Logic Learning (PLL) aims at inducing probabilistic logical models from data. Such frameworks combine expressive knowledge representation formalisms with reasoning mechanisms grounded in probability theory. Since numerous frameworks address this problem, comparing them is a necessary step toward unifying them. This paper compares Relational Markov Models (RMMs) and Bayesian Logic Programs (BLPs). We relate the semantics of the two formalisms, arguing that RMMs encode the same knowledge as a subclass of BLPs. We fully describe a translation from this subclass of BLPs into RMMs and provide complexity results demonstrating an exponential expansion in formula size, which shows that, under this translation, RMMs are less compact than the equivalent BLPs. We are not aware of any more compact translation between BLPs and RMMs. A full implementation has been realized, consisting of meta-interpreters for both BLPs and RMMs together with a translation engine. On practical examples, the probability distributions defined by BLPs and their corresponding RMMs have been verified to coincide.
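The kind of exponential expansion at stake can be illustrated with a toy grounding computation. This is only a generic illustration of why flattening first-order representations blows up, not the paper's actual BLP-to-RMM translation: grounding an atom `p(X1, ..., Xk)` over a finite domain `D` produces `|D|^k` ground instances, i.e. exponentially many in the arity `k`.

```python
from itertools import product

def ground_atoms(predicate, arity, domain):
    """Enumerate every ground instance of predicate/arity over the domain.

    The result has len(domain) ** arity elements, so the grounded
    (flat) representation grows exponentially with the arity.
    """
    return [f"{predicate}({', '.join(args)})"
            for args in product(domain, repeat=arity)]

domain = ["a", "b", "c"]
for k in range(1, 4):
    atoms = ground_atoms("p", k, domain)
    print(f"arity {k}: {len(atoms)} ground atoms")  # len == len(domain) ** k
```

A compact first-order clause thus stands for a number of ground formulas that is exponential in its number of logic variables, which is the same flavor of blow-up the complexity results above describe for the BLP-to-RMM translation.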