Methods for learning Bayesian networks can discover dependency structure between observed variables. Although these methods are useful in many applications, they run into computational and statistical problems in domains that involve a large number of variables. In this paper, we consider a solution that is applicable when many variables have similar behavior. We introduce a new class of models, module networks, that explicitly partition the variables into modules that share the same parents in the network and the same conditional probability distribution. We define the semantics of module networks, and describe an algorithm that learns the modules' composition and their dependency structure from data. Evaluation on real data in the domains of gene expression and the stock market shows that module networks generalize better than Bayesian networks, and that the learned module network structure reveals regularities that are obscured in learned Bayesian networks.
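The key structural idea above — a partition of variables into modules, where every variable in a module shares the same parents and the same conditional distribution — can be sketched in a few lines. This is a hypothetical minimal illustration of the semantics, not the authors' implementation; all names (`Module`, `joint_prob`, the "market"/"stock" variables) are invented for the example.

```python
from itertools import product

class Module:
    """One module: a set of member variables that all share the same
    parents and the same conditional probability table (CPT)."""
    def __init__(self, members, parents, cpt):
        self.members = members    # variables assigned to this module
        self.parents = parents    # parent variables shared by all members
        self.cpt = cpt            # tuple of parent values -> dist over {0, 1}

def joint_prob(modules, assignment):
    """The joint still factorizes per variable, as in a Bayesian network,
    but variables in the same module reuse one shared CPT -- the parameter
    sharing that distinguishes a module network."""
    p = 1.0
    for m in modules:
        parent_vals = tuple(assignment[v] for v in m.parents)
        dist = m.cpt[parent_vals]
        for var in m.members:
            p *= dist[assignment[var]]
    return p

# Toy stock-market example: two stock variables placed in one module,
# both depending on a single shared "market" parent.
market = Module(members=["market"], parents=[],
                cpt={(): {0: 0.5, 1: 0.5}})
stocks = Module(members=["msft", "intc"], parents=["market"],
                cpt={(0,): {0: 0.8, 1: 0.2},
                     (1,): {0: 0.3, 1: 0.7}})

names = ["market", "msft", "intc"]
total = sum(joint_prob([market, stocks], dict(zip(names, vals)))
            for vals in product([0, 1], repeat=3))
```

Because both stocks share one CPT, learning estimates its parameters from the pooled observations of all module members — the statistical advantage the abstract attributes to module networks in large domains.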