Viterbi training (VT) provides a fast but inconsistent estimator of hidden Markov models (HMMs). The inconsistency can be alleviated with little extra computation by adjusting VT so that it asymptotically recovers the true parameter values. The adjustment relies on infinite Viterbi alignments and the limiting probability distributions associated with them. As the first article in a series, this one is a proof of concept: it focuses on mixture models, an important special case of HMMs in which the limiting distributions can be computed exactly. A simulated Gaussian mixture shows that our central algorithm (VA1) can significantly improve the accuracy of VT at little extra cost. A subsequent article in the series develops the theory of adjusted VT for general HMMs, where the limiting distributions are more challenging to find. Here we also present another, more advanced correction to VT and verify its fast convergence and high accuracy; its computational feasibility requires further investigation.
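For orientation, the following is a minimal sketch of plain, unadjusted Viterbi training (hard-assignment or classification EM) for a one-dimensional Gaussian mixture, the baseline estimator the abstract calls VT. The VA1 adjustment described in the article additionally corrects the M-step using the limiting distributions of the Viterbi alignment; those corrections depend on results derived in the paper and are not reproduced here. All names and settings (viterbi_train, n_components, the simulated sample) are illustrative assumptions, not the authors' code or experiment.

```python
import numpy as np


def viterbi_train(x, n_components=2, n_iter=50, seed=0):
    """Hard-assignment (Viterbi) training of a 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    # Initialize means from random data points; pooled variance, uniform weights.
    means = rng.choice(x, size=n_components, replace=False).astype(float)
    variances = np.full(n_components, x.var())
    weights = np.full(n_components, 1.0 / n_components)

    for _ in range(n_iter):
        # "Viterbi" step: assign each observation to its most likely component
        # (a MAP alignment) instead of computing soft EM responsibilities.
        log_dens = (
            np.log(weights)
            - 0.5 * np.log(2 * np.pi * variances)
            - 0.5 * (x[:, None] - means) ** 2 / variances
        )
        labels = log_dens.argmax(axis=1)

        # M-step on the hard assignments; this is what makes VT fast but,
        # as the abstract notes, generally inconsistent.
        for k in range(n_components):
            xk = x[labels == k]
            if xk.size == 0:  # leave empty components unchanged
                continue
            weights[k] = xk.size / x.size
            means[k] = xk.mean()
            variances[k] = max(xk.var(), 1e-6)

    return weights, means, variances


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # A simulated two-component Gaussian mixture, loosely in the spirit of the
    # experiment mentioned in the abstract (exact settings are hypothetical).
    sample = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
    print(viterbi_train(sample))
```

In this baseline, the hard M-step estimates parameters from the classified subsamples, which is what biases VT; the adjustment studied in the article modifies these estimates so that, asymptotically, the fixed point coincides with the true parameters.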