For Hidden Markov Models (HMMs) with fully connected transition models, the three fundamental problems (evaluating the likelihood of an observation sequence, estimating an optimal state sequence for the observations, and learning the model parameters) all have time complexity quadratic in the number of states. We introduce a novel class of non-sparse Markov transition matrices, called Dense-Mostly-Constant (DMC) transition matrices, that allows us to derive new algorithms solving the basic HMM problems in sub-quadratic time. We describe the DMC HMM model and algorithms and attempt to convey some intuition for their usage. Empirical results for these algorithms show dramatic speedups on all three problems. In terms of accuracy, the DMC model yields strong results, outperforming the baseline algorithms even in domains known to violate the DMC assumption.
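To illustrate where the sub-quadratic cost comes from, the sketch below shows one step of the forward (likelihood) recursion under a DMC-style transition matrix in which each row stores K explicit entries and every other entry of that row equals a row-specific constant. The function names and the exact row-constant convention are illustrative assumptions for this sketch, not necessarily the paper's formulation; the point is that the constant part contributes a single O(N) sum shared by all columns, after which only the O(NK) dense entries need individual corrections, versus O(N^2) for a full matrix.

```python
def dmc_forward_step(alpha, dense, row_const, emit):
    """One step of the HMM forward recursion in O(N*K) time (sketch).

    alpha     : list of N forward probabilities at time t-1
    dense     : list of N dicts; dense[i][j] = A[i][j] for the K
                explicitly stored ("dense") entries of row i
    row_const : list of N floats; A[i][j] = row_const[i] for every
                column j not present in dense[i]
    emit      : list of N emission probabilities b_j(o_t)
    """
    n = len(alpha)
    # Contribution as if every row were entirely constant: one O(N) sum
    # that is identical for every destination column j.
    base = sum(a * c for a, c in zip(alpha, row_const))
    new_alpha = [base] * n
    # Correct only the columns touched by dense entries: O(N*K) total.
    for i, row in enumerate(dense):
        for j, a_ij in row.items():
            new_alpha[j] += alpha[i] * (a_ij - row_const[i])
    # Fold in the emission probabilities for the current observation.
    return [emit[j] * v for j, v in enumerate(new_alpha)]
```

The same decomposition applies to the Viterbi recursion (replace the shared sum with a shared max over `alpha[i] * row_const[i]`) and to the backward pass, which is how all three basic HMM problems benefit from the DMC structure.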