Latent variable models and factor analysis
Bayesian classification (AutoClass): theory and results
Advances in knowledge discovery and data mining
Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables
Machine Learning - Special issue on learning with probabilistic representations
Probabilistic Networks and Expert Systems
Hierarchical latent class models for cluster analysis
Eighteenth national conference on Artificial intelligence
Hierarchical Latent Class Models for Cluster Analysis
The Journal of Machine Learning Research
Asymptotic Model Selection for Naive Bayesian Networks
The Journal of Machine Learning Research
Dimension correction for hierarchical latent class models
UAI'02 Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence
On the geometry of Bayesian graphical models with hidden variables
UAI'98 Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
Asymptotic model selection for directed networks with hidden variables
UAI'96 Proceedings of the Twelfth international conference on Uncertainty in artificial intelligence
Automated analytic asymptotic evaluation of the marginal likelihood for latent models
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Latent tree models and diagnosis in traditional Chinese medicine
Artificial Intelligence in Medicine
Effective dimensions of partially observed polytrees
International Journal of Approximate Reasoning
A survey on latent tree models and applications
Journal of Artificial Intelligence Research
Hierarchical latent class (HLC) models are tree-structured Bayesian networks in which the leaf nodes are observed while the internal nodes are latent. There are no theoretically well-justified model selection criteria for HLC models in particular, or for Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a reasonable criterion to use in practice when learning HLC models. They also suggest that model selection can sometimes be improved by replacing the standard model dimension with the effective model dimension in the penalty term of the BIC score. Effective dimensions are, however, difficult to compute. In this paper, we prove a theorem that relates the effective dimension of an HLC model to the effective dimensions of a number of latent class models. The theorem makes it computationally feasible to compute the effective dimensions of large HLC models, and it can also be used to compute the effective dimensions of general tree models.
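To make the notion of effective dimension concrete, here is a minimal, hypothetical sketch (function names, tolerances, and the finite-difference scheme are my own choices, not the paper's method) for a single latent class model: one latent variable with r states and n observed binary variables. The effective dimension is estimated as the numerical rank of the Jacobian of the map from free parameters to the joint distribution, evaluated at a generic interior parameter point.

```python
import itertools
import numpy as np

def lc_joint(params, n, r):
    """Map free parameters to the joint distribution over the 2**n outcomes.

    Free parameters: r-1 class weights pi_1..pi_{r-1}, then the r*n
    conditionals P(X_i = 1 | Z = z), row-major by class z.
    """
    pi = np.append(params[: r - 1], 1.0 - params[: r - 1].sum())
    theta = params[r - 1 :].reshape(r, n)
    probs = []
    for x in itertools.product((0, 1), repeat=n):
        x = np.asarray(x)
        probs.append(
            sum(pi[z] * np.prod(theta[z] ** x * (1 - theta[z]) ** (1 - x))
                for z in range(r))
        )
    return np.asarray(probs)

def effective_dim(n, r, seed=0, eps=1e-5, tol=1e-6):
    """Estimate the effective dimension as the numerical Jacobian rank
    at a randomly drawn interior parameter point (central differences)."""
    d_std = (r - 1) + r * n                # standard model dimension
    rng = np.random.default_rng(seed)
    p0 = np.empty(d_std)
    p0[: r - 1] = rng.dirichlet(np.ones(r))[:-1]   # valid class weights
    p0[r - 1 :] = rng.uniform(0.2, 0.8, r * n)     # away from the boundary
    jac = np.empty((2 ** n, d_std))
    for k in range(d_std):
        hi, lo = p0.copy(), p0.copy()
        hi[k] += eps
        lo[k] -= eps
        jac[:, k] = (lc_joint(hi, n, r) - lc_joint(lo, n, r)) / (2 * eps)
    return int(np.linalg.matrix_rank(jac, tol=tol))
```

For n = 3 binary observables and r = 2 classes the standard dimension is 7, and this classical model is identifiable, so the rank matches. With r = 3 classes the standard dimension is 11, but the joint distribution lives in a 7-dimensional simplex, so the rank (and hence the effective dimension) cannot exceed 7 — exactly the kind of gap the BIC penalty correction discussed in the abstract is meant to address. For an HLC model, the paper's theorem reduces the computation to such per-latent-node latent class calculations rather than one huge Jacobian.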