We consider the task of learning the maximum-likelihood polytree from data. Our first result is a performance guarantee: the optimal branching (or Chow-Liu tree), which can be computed very easily, constitutes a good approximation to the best polytree. We then show that it is not possible to do much better, since the learning problem is NP-hard even to approximate within some constant factor.
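As a point of reference, the Chow-Liu tree mentioned above is computed by weighting each pair of variables with their empirical mutual information and taking a maximum-weight spanning tree. The following is a minimal sketch of that procedure; the function names and the union-find helper are illustrative, not taken from the paper.

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete columns."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def chow_liu_tree(data):
    """Return the edges of a Chow-Liu tree over the columns of `data`,
    a list of equal-length tuples of discrete values."""
    m = len(data[0])
    cols = [[row[j] for row in data] for j in range(m)]
    # Candidate edges, heaviest (most informative) first.
    edges = sorted(((mutual_information(cols[i], cols[j]), i, j)
                    for i in range(m) for j in range(i + 1, m)),
                   reverse=True)
    # Kruskal's algorithm with union-find: greedily add edges that
    # do not create a cycle, yielding a maximum-weight spanning tree.
    parent = list(range(m))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Since a tree on m variables has m - 1 edges, the greedy loop always terminates with exactly that many; the hardness result in the abstract concerns the strictly richer class of polytrees, where each node may have several parents.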