Hidden Markov models (HMMs) are now used in many fields, for example, speech recognition and natural language processing. However, a mathematical foundation for analyzing these models has not yet been established, since the HMM is non-identifiable. In recent years, we have developed an algebraic geometrical method that allows us to analyze non-regular and non-identifiable models. In this paper, we apply this method to the HMM and derive the asymptotic stochastic complexity in a mathematically rigorous way. Our results show that Bayesian estimation makes the generalization error small, and that the well-known BIC differs from the stochastic complexity.
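A brief sketch of why BIC can differ from the stochastic complexity, assuming the standard asymptotic form from singular learning theory (here \(\lambda\) denotes the learning coefficient and \(m\) its multiplicity; the HMM-specific values of these constants are what the paper derives, and are not reproduced here):

```latex
% Asymptotic stochastic complexity of a singular model
% (general form from algebraic-geometrical analysis):
F(n) = \lambda \log n - (m - 1) \log \log n + O(1)

% BIC instead applies the regular-model penalty,
% where d is the number of parameters:
\mathrm{BIC}(n) = \frac{d}{2} \log n + O(1)
```

For regular models \(\lambda = d/2\) and \(m = 1\), so the two asymptotics coincide; for singular models such as HMMs one has \(\lambda \le d/2\), so BIC overpenalizes relative to the true stochastic complexity.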