A serious problem in learning probabilistic models is the presence of hidden variables: variables that are never observed, yet interact with several of the observed variables. Detecting hidden variables poses two problems: determining their relations to the other variables in the model, and determining their number of states. In this paper, we address the latter problem in the context of Bayesian networks. We describe an approach that uses score-based agglomerative state clustering. As we show, this approach allows us to efficiently evaluate models over a range of cardinalities for the hidden variable. We also show how to extend the procedure to deal with multiple interacting hidden variables. We demonstrate the effectiveness of the approach on synthetic and real-life data, and show that it learns models with hidden variables that generalize better, and have better structure, than those learned by previous approaches.
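The paper's exact scoring function and network setting are not spelled out in this abstract, but the core idea of score-based agglomerative state clustering can be illustrated with a toy sketch. The following hypothetical example assumes a single hidden variable over one-dimensional data, a Gaussian model per state, and a BIC-style score; it starts from a fine-grained partition into states and greedily merges the adjacent pair whose merge gives the best score, which lets one model be scored at every candidate cardinality in a single pass.

```python
import math


def bic_score(states, n):
    """BIC-style score: Gaussian log-likelihood per state minus a
    complexity penalty (two parameters, mean and variance, per state).
    This is an illustrative stand-in for the paper's scoring function."""
    ll = 0.0
    for pts in states:
        m = sum(pts) / len(pts)
        var = sum((x - m) ** 2 for x in pts) / len(pts) + 1e-6  # avoid zero variance
        ll += sum(-0.5 * (math.log(2 * math.pi * var) + (x - m) ** 2 / var)
                  for x in pts)
    return ll - 0.5 * (2 * len(states)) * math.log(n)


def choose_cardinality(points, init_k=6):
    """Agglomerative state clustering: start from a fine-grained partition
    into init_k states, repeatedly merge the adjacent pair of states whose
    merge yields the highest score, and return the score and cardinality
    of the best model seen along the way."""
    pts = sorted(points)
    step = max(1, len(pts) // init_k)
    states = [pts[i:i + step] for i in range(0, len(pts), step)]
    n = len(pts)
    best_score, best_k = bic_score(states, n), len(states)
    while len(states) > 1:
        # evaluate every adjacent merge and keep the best-scoring one
        candidates = []
        for i in range(len(states) - 1):
            trial = states[:i] + [states[i] + states[i + 1]] + states[i + 2:]
            candidates.append((bic_score(trial, n), trial))
        score, states = max(candidates, key=lambda c: c[0])
        if score > best_score:
            best_score, best_k = score, len(states)
    return best_score, best_k
```

Because each merge coarsens the previous partition, the procedure visits one model per cardinality from `init_k` down to 1, so the whole range is evaluated with only O(k²) score computations rather than refitting each cardinality from scratch.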