Causal discovery becomes especially challenging when the possibility of latent confounding and/or selection bias is not assumed away. Ancestral graph models are particularly useful for this task because they can represent the presence of latent confounding and selection effects without explicitly invoking unobserved variables. Based on the machinery of ancestral graphs, there is a provably sound causal discovery algorithm, known as the FCI algorithm, that allows for the possibility of latent confounders and selection bias. However, the orientation rules used in the algorithm are not complete. In this paper, we provide additional orientation rules; augmented with these rules, the FCI algorithm is shown to be complete, in the sense that, under standard assumptions, it can discover all aspects of the causal structure that are uniquely determined by the facts of probabilistic dependence and independence. The result is useful for developing any causal discovery and reasoning system based on ancestral graph models.
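To make the constraint-based setting concrete, the following is a minimal sketch of the skeleton-search and collider-orientation core that FCI shares with other constraint-based algorithms such as PC. It is not the FCI orientation rule set itself (which works on partial ancestral graphs with circle marks); the `indep` oracle below is a hypothetical hand-coded independence model for a toy collider X → Z ← Y, introduced purely for illustration.

```python
from itertools import combinations

# Hypothetical conditional-independence oracle for a toy collider X -> Z <- Y:
# X and Y are marginally independent, but dependent conditional on Z.
def indep(a, b, cond):
    return {a, b} == {"X", "Y"} and "Z" not in cond

nodes = ["X", "Y", "Z"]

# Skeleton phase: start from the complete graph; remove an edge a - b
# whenever some conditioning set renders a and b independent, and record
# that separating set.
edges = {frozenset(p) for p in combinations(nodes, 2)}
sepset = {}
for a, b in combinations(nodes, 2):
    others = [n for n in nodes if n not in (a, b)]
    for size in range(len(others) + 1):
        for cond in combinations(others, size):
            if indep(a, b, set(cond)):
                edges.discard(frozenset((a, b)))
                sepset[frozenset((a, b))] = set(cond)

# Collider orientation: for an unshielded triple a - c - b (a and b
# nonadjacent) with c outside sepset(a, b), orient a -> c <- b.
arrows = set()
for a, b in combinations(nodes, 2):
    if frozenset((a, b)) in edges:
        continue
    for c in nodes:
        if c in (a, b):
            continue
        if frozenset((a, c)) in edges and frozenset((b, c)) in edges:
            if c not in sepset.get(frozenset((a, b)), set()):
                arrows.add((a, c))
                arrows.add((b, c))

print(sorted(arrows))  # [('X', 'Z'), ('Y', 'Z')]
```

The completeness question addressed in the paper concerns what happens after this stage: which further edge marks are forced by the independence facts alone, via additional orientation rules applied to the partially oriented graph.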