Hidden Markov models (HMMs) are stochastic models capable of statistical learning and classification. They have been widely applied in speech recognition and handwriting recognition because of their adaptability and versatility in handling sequential signals. However, because these models have a complex structure and the data sets involved usually contain uncertainty, the multiple observation training problem is difficult to analyze without making certain assumptions. For many years, researchers in speech and handwriting applications have used Levinson's training equations, which simply assume that all observations are independent of one another. This paper presents a formal treatment of HMM multiple observation training that does not impose this assumption. In this treatment, the multiple observation probability is expressed as a combination of individual observation probabilities without loss of generality. This combinatorial method gives more freedom in making different dependence-independence assumptions. By generalizing Baum's auxiliary function into this framework and constructing an associated objective function via the Lagrange multiplier method, it is proven that the derived training equations guarantee maximization of the objective function. Furthermore, we show that Levinson's training equations can be derived as a special case of this treatment.
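To make the setting concrete, the sketch below implements the special case the abstract mentions: Baum-Welch re-estimation of a discrete HMM from multiple observation sequences under the independence assumption (Levinson's equations), where each sequence contributes its expected counts with equal weight. This is an illustrative sketch, not the paper's generalized combinatorial method; all function names are my own, and scaled forward-backward recursions are used for numerical stability.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass for one discrete observation sequence.
    A: (N,N) transition matrix, B: (N,M) emission matrix, pi: (N,) initial
    distribution, obs: sequence of symbol indices. Returns scaled alpha,
    scaled beta, and the per-step scaling factors c."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, c

def baum_welch_multi(A, B, pi, sequences, n_iter=20):
    """Re-estimate (A, B, pi) from several observation sequences, assuming
    the sequences are mutually independent (Levinson's case): expected
    counts are accumulated across sequences before normalizing."""
    N, M = B.shape
    for _ in range(n_iter):
        A_num = np.zeros((N, N)); A_den = np.zeros(N)
        B_num = np.zeros((N, M)); B_den = np.zeros(N)
        pi_num = np.zeros(N)
        for obs in sequences:
            alpha, beta, c = forward_backward(A, B, pi, obs)
            # With this scaling, gamma_t(i) = alpha_t(i) * beta_t(i);
            # renormalizing guards against numerical drift.
            gamma = alpha * beta
            gamma /= gamma.sum(axis=1, keepdims=True)
            T = len(obs)
            for t in range(T - 1):
                # xi_t(i,j) = alpha_t(i) a_ij b_j(o_{t+1}) beta_{t+1}(j) / c_{t+1}
                xi = (alpha[t][:, None] * A * B[:, obs[t + 1]]
                      * beta[t + 1]) / c[t + 1]
                A_num += xi
                A_den += gamma[t]
            for t in range(T):
                B_num[:, obs[t]] += gamma[t]
                B_den += gamma[t]
            pi_num += gamma[0]
        A = A_num / A_den[:, None]
        B = B_num / B_den[:, None]
        pi = pi_num / len(sequences)
    return A, B, pi
```

The paper's contribution can be read against this baseline: instead of hard-coding the equal, independent weighting of each sequence's counts, the combinatorial treatment expresses the joint multiple-observation probability as a weighted combination of individual observation probabilities, with the independence case above recovered as a special choice of that combination.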