Learning Markov boundaries from data, without first learning a full Bayesian network, can be viewed as a feature subset selection problem and has received much attention due to its significance in a wide range of AI applications. Popular constraint-based methods suffer from high computational complexity and are often unstable in high-dimensional spaces. We propose a new perspective from matroid theory on the discovery of the Markov boundary of each random variable in the domain, and develop a greedy learning algorithm that is guaranteed to recover the true Markov boundaries. We then use the precision matrix of the original distribution as a measure of independence, which amounts to approximating the probabilistic relations with Gaussians; this makes the algorithm feasible for large-scale problems, since candidate Markov-boundary variables can be identified with low computational complexity. Experimental results on standard Bayesian networks show that our analysis and approximation can efficiently and accurately identify Markov boundaries in complex networks from data.
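Under the Gaussian approximation mentioned above, a zero entry in the precision matrix corresponds to conditional independence of the two variables given all the others, so candidate Markov-boundary variables can be read off from a thresholded row of the precision matrix (normalized to partial correlations). A minimal sketch of this idea, assuming a simple fixed threshold and synthetic data of our own (the function name, threshold value, and example are illustrative, not from the paper):

```python
import numpy as np

def gaussian_markov_blanket(X, target, threshold=0.05):
    """Approximate the Markov boundary of column `target` of data
    matrix X by thresholding the target's row of the precision
    matrix, rescaled to partial correlations. Under a Gaussian
    assumption, a zero precision entry (i, j) means X_i and X_j
    are conditionally independent given all remaining variables."""
    cov = np.cov(X, rowvar=False)        # sample covariance matrix
    precision = np.linalg.inv(cov)       # precision matrix
    # Rescale to partial correlations for a scale-free threshold.
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)
    row = np.abs(partial_corr[target])
    row[target] = 0.0                    # exclude the target itself
    return np.flatnonzero(row > threshold)

# Toy example: X2 = X0 + X1 + noise, X3 independent of the rest,
# so the Markov boundary of X2 should be {X0, X1}.
rng = np.random.default_rng(0)
n = 5000
x0, x1 = rng.normal(size=n), rng.normal(size=n)
x2 = x0 + x1 + 0.5 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x0, x1, x2, x3])
print(gaussian_markov_blanket(X, target=2))
```

In a high-dimensional setting one would replace the plain matrix inverse with a regularized precision-matrix estimate, but the selection step stays the same: keep the variables whose partial correlation with the target exceeds the threshold.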