Probabilistic reasoning in intelligent systems: networks of plausible inference
Probabilistic Networks and Expert Systems
Learning Bayesian networks from data: an information-theory based approach
Artificial Intelligence
Equivalence and synthesis of causal models
UAI '90 Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence
Large-Sample Learning of Bayesian Networks is NP-Hard
The Journal of Machine Learning Research
Learning Bayesian Networks
Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm
The Journal of Machine Learning Research
The Journal of Machine Learning Research
Bounding the false discovery rate in local Bayesian network learning
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 2
Artificial Intelligence: A Modern Approach
The Journal of Machine Learning Research
Learning the structure of Bayesian networks with constraint satisfaction
Learning Bayesian network structure from massive datasets: the "sparse candidate" algorithm
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
A Bayesian multiresolution independence test for continuous variables
UAI'01 Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence
Causal inference and causal explanation with background knowledge
UAI'95 Proceedings of the Eleventh conference on Uncertainty in artificial intelligence
Strong faithfulness and uniform consistency in causal inference
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Constraint-based learning of Bayesian networks (BN) from limited data can lead to multiple-testing problems when recovering dense areas of the skeleton and to conflicting results in the orientation of edges. In this paper, we present a new constraint-based algorithm, light mutual min (LMM), for improved accuracy of BN learning from small-sample data. LMM improves the assessment of candidate edges by using a ranking criterion that considers conditional independence given neighboring variables at both sides of an edge simultaneously. The algorithm also employs an adaptive relaxation of constraints that selectively allows some nodes not to condition on some of their neighbors. This relaxation aims at reducing the incorrect rejection, due to multiple testing, of true edges connecting high-degree nodes. LMM additionally incorporates a new criterion for ranking v-structures that is used to recover the completed partially directed acyclic graph (CPDAG) and to resolve conflicting v-structures, a common problem in small-sample constraint-based learning. Using simulated data, each of these components of LMM is shown to significantly improve network inference compared to commonly applied methods when learning from limited data, including more accurate recovery of skeletons and CPDAGs than the PC, MaxMin, and MaxMin hill climbing algorithms. A proof of asymptotic correctness of LMM for recovering the true skeleton and CPDAG is also provided.
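To make the constraint-based setting concrete, the following is a minimal sketch of the generic PC-style skeleton-recovery step that algorithms in this family (including LMM's baselines) build on: edges are deleted whenever some small conditioning set drawn from a node's current neighbors renders the endpoints conditionally independent. This is not the LMM algorithm itself, whose edge-ranking and constraint-relaxation details are not given in the abstract; the Fisher z-test, the `max_cond` cap, and all function names here are illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from math import sqrt, log, erf

def ci_test(data, i, j, cond, alpha=0.05):
    """Fisher z-test of the partial correlation of columns i, j given cond.
    Returns True when H0 (conditional independence) is NOT rejected."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    r = max(min(r, 0.9999), -0.9999)
    n = data.shape[0]
    z = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - len(cond) - 3)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
    return p > alpha

def learn_skeleton(data, alpha=0.05, max_cond=2):
    """PC-style skeleton search: drop edge (i, j) if some conditioning set
    of size <= max_cond among i's current neighbors makes them independent."""
    p = data.shape[1]
    adj = {v: set(range(p)) - {v} for v in range(p)}
    for size in range(max_cond + 1):
        for i in range(p):
            for j in sorted(adj[i]):
                if j < i:
                    continue
                for cond in combinations(sorted(adj[i] - {j}), size):
                    if ci_test(data, i, j, cond, alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
    return {(i, j) for i in range(p) for j in adj[i] if i < j}
```

On data sampled from a chain X0 -> X1 -> X2, the marginal dependence between X0 and X2 disappears once X1 is conditioned on, so the recovered skeleton keeps edges (0, 1) and (1, 2) but not (0, 2). The multiple-testing hazard the abstract addresses is visible here: a high-degree node generates many candidate conditioning sets, and each extra test is an extra chance to wrongly delete a true edge.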