On the structure of hidden Markov models
Pattern Recognition Letters
This report studies when and why two Hidden Markov Models (HMMs) may represent the same stochastic process. HMMs are characterized in terms of equivalence classes whose elements represent identical stochastic processes. This characterization yields polynomial-time algorithms for detecting equivalent HMMs. We also give fast algorithms that reduce HMMs to essentially unique, minimal canonical representations. The reduction to canonical form leads to the definition of "Generalized Markov Models", which are essentially HMMs without the positivity constraint on their parameters. We discuss how this generalization can yield more parsimonious representations of stochastic processes, at the cost of the probabilistic interpretation of the model parameters.
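To make the notion of equivalence concrete, the following sketch (not taken from the report; all names and parameter values are illustrative) compares two HMMs by computing sequence probabilities with the standard forward recursion. The example pairs a two-state HMM whose states share the same emission distribution with a one-state HMM; both induce the same output process, so every finite sequence receives the same probability under both models.

```python
import itertools
import numpy as np

def seq_prob(pi, A, B, seq):
    """Forward algorithm: P(o_1..o_T) for an HMM with initial
    distribution pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, seq[0]]          # joint prob of state and first symbol
    for o in seq[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()                 # marginalize over the final state

# HMM 1: two hidden states with identical emission rows, so the
# hidden state is invisible in the output law.
pi1 = np.array([0.3, 0.7])
A1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
B1 = np.array([[0.5, 0.5],
               [0.5, 0.5]])

# HMM 2: a single state emitting the same symbol distribution.
pi2 = np.array([1.0])
A2 = np.array([[1.0]])
B2 = np.array([[0.5, 0.5]])

# Both models assign probability 0.5**T to every binary sequence
# of length T, hence they represent the same stochastic process.
for T in range(1, 6):
    for seq in itertools.product(range(2), repeat=T):
        assert np.isclose(seq_prob(pi1, A1, B1, seq),
                          seq_prob(pi2, A2, B2, seq))
```

Brute-force comparison over all sequences is exponential in the length, which is exactly why the polynomial-time equivalence tests the report derives are of interest; the sketch only illustrates what those tests decide.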