The growth of information available to learning systems and the increasing complexity of learning tasks call for algorithms that scale well with respect to all learning parameters. In the context of supervised sequential learning, the Viterbi algorithm plays a fundamental role: it evaluates the best (most probable) sequence of labels with a time complexity linear in the number of time events and quadratic in the number of labels. In this paper we propose CarpeDiem, a novel algorithm that evaluates the best possible sequence of labels in sub-quadratic time. We provide theoretical grounding together with solid empirical results supporting two chief facts: CarpeDiem always finds the optimal solution, requiring in most cases only a small fraction of the time taken by the Viterbi algorithm; meanwhile, CarpeDiem is never asymptotically worse than the Viterbi algorithm, confirming it as a sound replacement.
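For readers unfamiliar with the baseline the abstract refers to, the standard Viterbi recurrence can be sketched as follows. This is an illustrative implementation, not the paper's CarpeDiem algorithm; the dictionary-based model parameters (`start_p`, `trans_p`, `emit_p`) are assumed names chosen for the example.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable label sequence for the observations.

    Runs in O(T * K^2) time: linear in the number of time events T,
    quadratic in the number of labels K -- the complexity that
    CarpeDiem sets out to improve on.
    """
    # V[t][s] = (best probability of any path ending in state s at
    # time t, predecessor state on that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

For example, on the classic two-state "Healthy/Fever" toy HMM, the inner `max` over predecessor states is exactly the quadratic-in-labels step that a sub-quadratic decoder avoids evaluating in full.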