We present two algorithms for learning the structure of a Markov network from data: GSMN* and GSIMN. Both algorithms use statistical independence tests to infer the structure by successively constraining the set of structures consistent with the results of these tests. Until very recently, algorithms for structure learning were based on maximum likelihood estimation, which has been proved NP-hard for Markov networks due to the difficulty of estimating the network parameters needed to compute the data likelihood. The independence-based approach does not require computing the likelihood, and thus both GSMN* and GSIMN can compute the structure efficiently (as shown in our experiments). GSMN* is an adaptation of the Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearl's well-known properties of the conditional independence relation to infer novel independences from known ones, thus avoiding the need to perform statistical tests to estimate them. To accomplish this efficiently, GSIMN uses the Triangle theorem, also introduced in this work, which is a simplified version of the set of Markov axioms. Experimental comparisons on artificial and real-world data sets show that GSIMN can yield significant computational savings with respect to GSMN* while generating a Markov network of comparable or, in some cases, improved quality. We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH, that produces all possible conditional independences resulting from repeatedly applying Pearl's theorems to the known conditional independence tests. The results of this comparison show that GSIMN, through the sole use of the Triangle theorem, is nearly optimal in terms of the set of independence tests it infers.
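To illustrate the grow-shrink idea that underlies GSMN*, the following is a minimal sketch (not the paper's algorithm) of Markov blanket discovery driven by an independence oracle. On real data the oracle would be a statistical independence test; here a hypothetical graph-separation oracle on a known toy network stands in for it, since in a Markov network X is conditionally independent of Y given Z exactly when Z separates X from Y in the graph. All names and the example network are our own, for illustration only.

```python
from collections import deque

def separated(graph, x, y, z):
    """Graph-separation oracle: returns True iff the conditioning set z
    blocks every path between x and y (i.e., x and y are conditionally
    independent given z in the Markov network `graph`)."""
    seen, queue = {x}, deque([x])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w == y:
                return False          # unblocked path found: dependent
            if w not in seen and w not in z:
                seen.add(w)
                queue.append(w)
    return True                       # y unreachable: independent

def markov_blanket(target, variables, indep):
    """Grow-Shrink discovery of the Markov blanket of `target`,
    using `indep(x, y, z)` as the independence test."""
    blanket = set()
    changed = True
    while changed:                    # grow: add variables that appear dependent
        changed = False
        for v in variables:
            if v != target and v not in blanket and not indep(target, v, blanket):
                blanket.add(v)
                changed = True
    for v in sorted(blanket):         # shrink: remove false positives
        if indep(target, v, blanket - {v}):
            blanket.remove(v)
    return blanket

# Toy chain network A - B - C - D (hypothetical example)
chain = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
indep = lambda x, y, z: separated(chain, x, y, z)
print(sorted(markov_blanket('B', sorted(chain), indep)))  # ['A', 'C']
```

The grow phase may admit variables that are only marginally dependent on the target (here, all chain variables are), which is why the shrink phase retests each blanket member against the rest of the blanket and discards those that become independent.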