IEEE Transactions on Knowledge and Data Engineering
We present a simplified EM algorithm and an approximate algorithm for training hierarchical hidden Markov models (HHMMs), an extension of hidden Markov models. Unlike the existing algorithm, the generalized Baum-Welch algorithm, the EM algorithm we present is proved to increase the likelihood of the training sentences at each iteration. The approximate algorithm is applicable to tasks such as robot navigation, in which sentences are observed and parameters are trained simultaneously. Both algorithms and their derivations are simplified by exploiting the correspondence between HHMMs and stochastic context-free grammars.
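The monotonicity property claimed for the EM algorithm can be illustrated on a simpler case. The sketch below is not the paper's HHMM algorithm: it is an ordinary Baum-Welch (EM) loop for a flat two-state HMM on an invented toy sequence, showing the guarantee in question, namely that the training-data likelihood never decreases across iterations. All parameter values and observations are made up for the demonstration.

```python
import math

# Toy setup (all values invented for illustration).
N, M = 2, 2                          # number of hidden states / symbols
obs = [0, 1, 1, 0, 1, 0, 0, 1]       # toy observation sequence

pi = [0.6, 0.4]                      # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]         # A[i][j] = P(next = j | current = i)
B = [[0.8, 0.2], [0.3, 0.7]]         # B[i][k] = P(symbol = k | state = i)

def em_step(obs, pi, A, B):
    """One EM iteration; returns updated (pi, A, B) and the current log-likelihood."""
    T = len(obs)
    # Forward pass (no scaling needed for this tiny example).
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for t in range(1, T):
        alpha.append([B[j][obs[t]] * sum(alpha[t-1][i] * A[i][j] for i in range(N))
                      for j in range(N)])
    # Backward pass.
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(A[i][j] * B[j][obs[t+1]] * beta[t+1][j]
                             for j in range(N))
    prob = sum(alpha[T-1][i] for i in range(N))   # P(obs | model)
    # E-step: posterior state occupancies and transition counts.
    gamma = [[alpha[t][i] * beta[t][i] / prob for i in range(N)] for t in range(T)]
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t+1]] * beta[t+1][j] / prob
            for j in range(N)] for i in range(N)] for t in range(T - 1)]
    # M-step: re-estimate parameters from the expected counts.
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1)) for j in range(N)]
             for i in range(N)]
    new_B = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
              sum(gamma[t][i] for t in range(T)) for k in range(M)]
             for i in range(N)]
    return new_pi, new_A, new_B, math.log(prob)

logliks = []
for _ in range(20):
    pi, A, B, ll = em_step(obs, pi, A, B)
    logliks.append(ll)

# EM guarantee: log-likelihood is non-decreasing across iterations.
assert all(b >= a - 1e-9 for a, b in zip(logliks, logliks[1:]))
```

For an HHMM the E-step is more involved, since the posteriors must be computed over a hierarchy of sub-models (the paper does this via the correspondence with stochastic context-free grammars), but the monotone-likelihood guarantee being proved is exactly of this form.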