Supervised learning of hidden Markov models for sequence discrimination
RECOMB '97 Proceedings of the first annual international conference on Computational molecular biology
Neural Computation
Training Hidden Markov Models with Multiple Observations-A Combinatorial Method
IEEE Transactions on Pattern Analysis and Machine Intelligence
On the Learnability of Hidden Markov Models
ICGI '02 Proceedings of the 6th International Colloquium on Grammatical Inference: Algorithms and Applications
On-Line Estimation of Hidden Markov Model Parameters
DS '00 Proceedings of the Third International Conference on Discovery Science
Simplified Training Algorithms for Hierarchical Hidden Markov Models
DS '01 Proceedings of the 4th International Conference on Discovery Science
ICCMSE '03 Proceedings of the international conference on Computational methods in sciences and engineering
Hybrid modeling, hmm/nn architectures, and protein applications
Neural Computation
A comparison of techniques for on-line incremental learning of HMM parameters in anomaly detection
CISDA'09 Proceedings of the Second IEEE international conference on Computational intelligence for security and defense applications
An unsupervised approach for linking automatically extracted and manually crafted LTAGs
CICLing'11 Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part I
Adaptive ROC-based ensembles of HMMs applied to anomaly detection
Pattern Recognition
ACO-based BW algorithm for parameter estimation of hidden Markov models
International Journal of Computer Applications in Technology
A survey of techniques for incremental learning of HMM parameters
Information Sciences: an International Journal
Self-Organizing Hidden Markov Model Map (SOHMMM)
Neural Networks
A simple learning algorithm for Hidden Markov Models (HMMs) is presented, together with a number of variations. Unlike other classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, with or without the usual Viterbi most-likely-path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms presented are proved to be exact or approximate gradient-optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and as such are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework, where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in the appendix.
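The smooth, gradient-based scheme the abstract describes can be sketched as follows. This is a minimal illustration under assumed details, not the paper's exact update rules: transition and emission probabilities are stored as unnormalized logits, recovered through a normalized-exponential (softmax) map, and nudged by the gradient of the log-likelihood after each observation sequence. Because the parameters live in logit space, every update keeps the rows valid probability distributions without any explicit renormalization step, which is what makes the algorithm "smooth" compared with Baum-Welch's hard reestimation. The function names (`forward_backward`, `online_step`) and the learning rate are illustrative choices.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass.

    Returns the sequence log-likelihood, the state posteriors gamma[t, i],
    and the expected transition counts xi[i, j] summed over time.
    """
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    scale = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]
    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    gamma = alpha * beta                       # P(state_t = i | obs)
    xi = np.zeros((N, N))                      # expected transition counts
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / scale[t + 1]
    return np.log(scale).sum(), gamma, xi

def online_step(WA, WB, pi, obs, lr=0.1):
    """One on-line gradient-ascent step on the logits after a single sequence.

    WA, WB hold unnormalized logits; softmax of each row gives the
    transition matrix A and emission matrix B (normalized-exponential
    representation).  Updates WA and WB in place; returns the
    log-likelihood of the sequence under the pre-update parameters.
    """
    A = np.exp(WA); A /= A.sum(1, keepdims=True)
    B = np.exp(WB); B /= B.sum(1, keepdims=True)
    ll, gamma, xi = forward_backward(A, B, pi, obs)
    # Gradient of log P(obs) w.r.t. the softmax logits:
    #   d logL / d WA[i, j] = E[n_ij] - E[n_i] * A[i, j]
    WA += lr * (xi - xi.sum(1, keepdims=True) * A)
    emit = np.zeros_like(B)                    # expected emission counts
    for t, o in enumerate(obs):
        emit[:, o] += gamma[t]
    WB += lr * (emit - emit.sum(1, keepdims=True) * B)
    return ll
```

Calling `online_step` repeatedly on incoming sequences performs stochastic gradient ascent on the log-likelihood; because each step is a plain gradient step in an unconstrained space, the same code serves for on-line and batch use (accumulate gradients before applying them), matching the flexibility the abstract claims.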