On Feature Selection: Learning with Exponentially Many Irrelevant Features as Training Examples
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
Equivalence and synthesis of causal models
UAI '90 Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence
Enumerating Markov Equivalence Classes of Acyclic Digraph Models
UAI '01 Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence
Learning equivalence classes of Bayesian-network structures
The Journal of Machine Learning Research
Optimal structure identification with greedy search
The Journal of Machine Learning Research
Tractable learning of large Bayes net structures from sparse data
ICML '04 Proceedings of the Twenty-First International Conference on Machine Learning
Learning Bayesian Networks
On Model Selection Consistency of Lasso
The Journal of Machine Learning Research
Causal inference and causal explanation with background knowledge
UAI '95 Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence
Strong completeness and faithfulness in Bayesian networks
UAI '95 Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence
Strong faithfulness and uniform consistency in causal inference
UAI '03 Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence
A hybrid Bayesian network learning method for constructing gene networks
Computational Biology and Chemistry
A Recursive Method for Structural Learning of Directed Acyclic Graphs
The Journal of Machine Learning Research
Structure learning with independent non-identically distributed data
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
Learning Gaussian graphical models of gene networks with false discovery rate control
EvoBIO'08 Proceedings of the 6th European conference on Evolutionary computation, machine learning and data mining in bioinformatics
Introduction to Causal Inference
The Journal of Machine Learning Research
Influence of Prior Knowledge in Constraint-Based Learning of Gene Regulatory Networks
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB)
Analyzing supply chain operation models with the PC-algorithm and the neural network
Expert Systems with Applications: An International Journal
DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model
The Journal of Machine Learning Research
Review: Learning Bayesian networks: approaches and issues
The Knowledge Engineering Review
Improving the performance of heuristic algorithms based on causal inference
MICAI '11 Proceedings of the 10th Mexican International Conference on Advances in Artificial Intelligence - Volume Part I
A two-stage analysis of the influences of employee alignment on effecting business-IT alignment
Decision Support Systems
Maximum Likelihood Estimation Over Directed Acyclic Gaussian Graphs
Statistical Analysis and Data Mining
Stable graphical model estimation with Random Forests for discrete, continuous, and mixed variables
Computational Statistics & Data Analysis
High-dimensional Gaussian graphical model selection: walk summability and local separation criterion
The Journal of Machine Learning Research
Adaptive thresholding in structure learning of a Bayesian network
IJCAI '13 Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
Sub-local constraint-based learning of Bayesian networks using a joint dependence criterion
The Journal of Machine Learning Research
PC algorithm for nonparanormal graphical models
The Journal of Machine Learning Research
Two optimal strategies for active learning of causal models from interventional data
International Journal of Approximate Reasoning
We consider the PC-algorithm (Spirtes et al., 2000) for estimating the skeleton and equivalence class of a very high-dimensional directed acyclic graph (DAG) with a corresponding Gaussian distribution. The PC-algorithm is computationally feasible and often very fast for sparse problems with many nodes (variables), and it has the attractive property of automatically achieving high computational efficiency as a function of the sparseness of the true underlying DAG. We prove uniform consistency of the algorithm for very high-dimensional, sparse DAGs, where the number of nodes is allowed to grow quickly with the sample size n, as fast as O(n^a) for any 0 < a < ∞. We also demonstrate the PC-algorithm on simulated data.
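The skeleton phase of the PC-algorithm described in the abstract can be sketched in a few lines: start from the complete undirected graph and delete an edge i–j as soon as some conditioning set S drawn from the current neighbours renders i and j conditionally independent, here judged by a Fisher-z test on Gaussian partial correlations. The following is an illustrative sketch, not the authors' reference implementation; the function name `pc_skeleton`, the significance level, and the use of a pseudo-inverse for the partial correlations are our own choices.

```python
import itertools
import math

import numpy as np
from scipy.stats import norm


def pc_skeleton(data, alpha=0.01):
    """Sketch of the PC-algorithm's skeleton phase for Gaussian data.

    data: (n_samples, p) array; returns a set of frozenset edges.
    """
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    # Start from the complete undirected graph on p nodes.
    adj = {i: set(range(p)) - {i} for i in range(p)}
    crit = norm.ppf(1 - alpha / 2)  # two-sided N(0,1) critical value

    def indep(i, j, S):
        # Partial correlation of i and j given S, read off the inverse of
        # the correlation submatrix restricted to {i, j} | S.
        idx = [i, j] + list(S)
        prec = np.linalg.pinv(corr[np.ix_(idx, idx)])
        r = -prec[0, 1] / math.sqrt(prec[0, 0] * prec[1, 1])
        r = min(0.999999, max(-0.999999, r))
        # Fisher z-transform of the partial correlation.
        z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - len(S) - 3)
        return abs(z) < crit

    # Test conditioning sets of growing size ell, as in the PC-algorithm;
    # for a sparse true DAG this loop stops after few rounds.
    ell = 0
    while any(len(adj[i] - {j}) >= ell for i in range(p) for j in adj[i]):
        for i in range(p):
            for j in list(adj[i]):
                for S in itertools.combinations(adj[i] - {j}, ell):
                    if indep(i, j, S):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        ell += 1
    return {frozenset((i, j)) for i in adj for j in adj[i]}
```

On data simulated from a chain X0 → X1 → X2 with Gaussian noise, the sketch should drop the spurious X0–X2 edge once it conditions on X1, leaving the true skeleton {X0–X1, X1–X2}.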