Probabilistic reasoning in intelligent systems: networks of plausible inference
Learning Bayesian networks: a unification for discrete and Gaussian domains
UAI'95: Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence
A Bayesian approach to learning causal networks
UAI'95: Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence
Incorporating expert knowledge when learning Bayesian network structure: A medical case study
Artificial Intelligence in Medicine
Review: Learning Bayesian networks: approaches and issues
The Knowledge Engineering Review
Robust inference of Bayesian networks using speciated evolution and ensemble
ISMIS'05: Proceedings of the 15th International Conference on Foundations of Intelligent Systems
We report the use of genetic algorithms (GAs) as a search mechanism for the discovery of linear causal models, using two Bayesian metrics for such models: a Minimum Message Length (MML) metric [10] and a full posterior analysis (BGe) [3]. We also consider two structure priors over causal models: one giving all variable orderings for models with the same arc density equal prior probability (P1), and one assigning all causal structures with the same arc density equal priors (P2). Evaluated with Kullback-Leibler distance, prior P2 tended to produce models closer to the true model than P1 for both metrics, with MML performing slightly better than BGe. By contrast, when using an evaluation metric that better reflects the nature of the causal discovery task, namely one that compares predictive performance on the effect nodes of the discovered model, P1 generally outperformed P2, with MML and BGe discovering models of similar predictive performance at various sample sizes. This supports our conjecture that the P1 prior is more appropriate for causal discovery.
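The first evaluation criterion mentioned above, Kullback-Leibler distance, measures how far a discovered model's distribution lies from the true model's. A minimal sketch of that computation for discrete distributions (the function name and the example probability vectors are illustrative, not taken from the paper):

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler distance KL(p || q) between two discrete
    distributions given as probability vectors over the same states."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Only terms with p > 0 contribute; q must be positive wherever p is.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical example: joint distribution of the true model vs. the
# distribution implied by a discovered model.
p_true = [0.4, 0.3, 0.2, 0.1]
p_learned = [0.35, 0.3, 0.25, 0.1]
print(kl_divergence(p_true, p_learned))
```

KL distance is zero only when the two distributions agree, so smaller values indicate a discovered model closer to the true one, which is the sense in which P2 "tended to produce models closer to the true model" under this criterion.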