Since the early days of digital communication, hidden Markov models (HMMs) have been used in a wide range of applications, and they are now routine in speech recognition, natural language processing, image analysis, and bioinformatics. In an HMM (Xt, Yt)t ≥ 1, the observations X1, X2, ... are assumed to be conditionally independent given a Markov process Y1, Y2, ..., which itself is not observed; moreover, the conditional distribution of Xt depends solely on Yt. Central to the theory and applications of HMMs is the Viterbi algorithm, which finds a maximum a posteriori probability (MAP) estimate v(x1:T) = (v1, v2, ..., vT) of Y1:T given observed data x1:T. MAP paths are also known as Viterbi paths, or alignments. Recently, attempts have been made to study the behavior of Viterbi alignments as T → ∞; in particular, it has been shown that in some cases a well-defined limiting Viterbi alignment exists. While innovative, these attempts have relied on rather strong assumptions, and the proofs involved are existential. This work proves the existence of infinite Viterbi alignments in a more constructive manner and for a very general class of HMMs.
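For concreteness, the Viterbi algorithm referred to above can be sketched as a dynamic program over the hidden states. The following is a minimal illustration for a finite-state HMM with discrete observations; the function name, array layout, and the toy parameters are our own choices, not taken from the paper:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """MAP state path v(x_{1:T}) for a discrete-observation HMM.

    obs : sequence of observation indices x_1, ..., x_T
    pi  : initial state distribution, shape (K,)
    A   : transition matrix, A[i, j] = P(Y_{t+1} = j | Y_t = i), shape (K, K)
    B   : emission matrix, B[i, x] = P(X_t = x | Y_t = i), shape (K, M)
    Returns the Viterbi path (v_1, ..., v_T) as a list of state indices.
    """
    T, K = len(obs), len(pi)
    # Work in log space so long sequences do not underflow.
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]   # best log-probability ending in each state
    psi = np.zeros((T, K), dtype=int)      # back-pointers: best predecessor state
    for t in range(1, T):
        scores = delta[:, None] + logA     # scores[i, j]: arrive in j from i
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(K)] + logB[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy 2-state example (illustrative parameters): states tend to persist,
# and each state prefers to emit its own symbol.
pi = np.array([0.8, 0.2])
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(viterbi([0, 0, 1, 1], pi, A, B))  # → [0, 0, 1, 1]
```

The recursion costs O(T·K²) time; the question studied in the paper is what happens to the resulting path as T → ∞.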