A new method is proposed for compiling causal independencies into Markov logic networks (MLNs). An MLN can be viewed as compactly representing a factorization of a joint probability into a product of factors guided by logical formulas. We present a notion of causal independence that enables one to further factorize these factors into combinations of even smaller factors, and consequently to obtain a finer-grain factorization of the joint probability. Causal independence lets us specify each factor in terms of weighted, directed clauses and operators, such as "or", "sum", or "max", applied to the contributions of the variables involved in the factor, thereby combining both undirected and directed knowledge. Our experimental evaluation shows that exploiting the finer-grain factorization provided by causal independence can improve the quality of parameter learning in MLNs.
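To illustrate the kind of finer-grain factorization the abstract describes, the sketch below shows the classic noisy-or form of causal independence: a conditional distribution over an effect with n parents, which would ordinarily require a table of 2^n entries, is decomposed into n per-cause parameters combined with an "or" operator. The variable names and probabilities here are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of causal independence via noisy-or.
# p_activate[i] is the probability that cause i alone triggers the effect;
# these per-cause numbers are the "smaller factors" of the decomposition.
from itertools import product

p_activate = [0.8, 0.6, 0.3]  # illustrative per-cause parameters

def noisy_or(causes):
    """P(effect=1 | causes): combine per-cause contributions with 'or'.

    The effect fails only if every active cause independently fails,
    so the failure probabilities of active causes multiply.
    """
    fail = 1.0
    for c, p in zip(causes, p_activate):
        if c:
            fail *= (1.0 - p)
    return 1.0 - fail

# A full conditional table over n binary causes needs 2^n entries ...
full_table = {c: noisy_or(c) for c in product([0, 1], repeat=3)}

# ... while the causally independent form stores only n numbers
# (p_activate), reconstructing any table entry on demand.
```

The same pattern applies to the other combining operators mentioned in the abstract ("sum", "max"): each replaces the huge joint factor with per-cause contributions plus a cheap aggregation, which is what yields the finer-grain factorization used during parameter learning.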