Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
TnT: a statistical part-of-speech tagger
ANLC '00 Proceedings of the sixth conference on Applied natural language processing
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Shallow parsing with conditional random fields
NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1
Large Margin Methods for Structured and Interdependent Output Variables
The Journal of Machine Learning Research
Fast inference and learning in large-state-space HMMs
ICML '05 Proceedings of the 22nd international conference on Machine learning
EMNLP '02 Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10
Bidirectional inference with the easiest-first strategy for tagging sequence data
HLT '05 Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing
Multilevel coarse-to-fine PCFG parsing
HLT-NAACL '06 Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics
Structure compilation: trading structure for features
Proceedings of the 25th international conference on Machine learning
Structured machine learning: the next ten years
Machine Learning
Extremely lexicalized models for accurate and fast HPSG parsing
EMNLP '06 Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Efficient HPSG parsing with supertagging and CFG-filtering
IJCAI'07 Proceedings of the 20th international joint conference on Artificial intelligence
Efficient inference of CRFs for large-scale natural language data
ACLShort '09 Proceedings of the ACL-IJCNLP 2009 Conference Short Papers
Phrase clustering for discriminative learning
ACL '09 Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2
CarpeDiem: Optimizing the Viterbi Algorithm and Applications to Supervised Sequential Learning
The Journal of Machine Learning Research
Efficient inference in large conditional random fields
ECML'06 Proceedings of the 17th European conference on Machine Learning
Speeding up HMM decoding and training by exploiting sequence repetitions
CPM'07 Proceedings of the 18th annual conference on Combinatorial Pattern Matching
Computational linguistics and natural language processing
CICLing'11 Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part I
Iterative Viterbi A* algorithm for k-best sequential decoding
ACL '12 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1
The Viterbi algorithm is the conventional decoding algorithm most widely adopted for sequence labeling. Viterbi decoding is, however, prohibitively slow when the label set is large, because its time complexity is quadratic in the number of labels. This paper proposes an exact decoding algorithm that overcomes this problem. A novel property of our algorithm is that it efficiently reduces the set of labels to be decoded, while still allowing us to verify the optimality of the solution. Experiments on three tasks (POS tagging, joint POS tagging and chunking, and supertagging) show that the new algorithm is several orders of magnitude faster than both the basic Viterbi algorithm and a state-of-the-art algorithm, CARPEDIEM (Esposito and Radicioni, 2009).
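To make the quadratic cost concrete, here is a minimal sketch of the standard Viterbi baseline the paper speeds up, not the paper's pruned algorithm. The function name, the toy log-score matrices, and the list-based representation are illustrative assumptions; the inner double loop over labels is what makes each time step O(L^2).

```python
# Minimal sketch of standard Viterbi decoding for a linear-chain model.
# emit[t][j]: log-score of label j at position t; trans[i][j]: log-score
# of transitioning from label i to label j. Illustrative only -- this is
# the O(T * L^2) baseline, not the paper's reduced-label exact decoder.

def viterbi(emit, trans):
    """Return the highest-scoring label sequence as a list of label indices."""
    T, L = len(emit), len(emit[0])
    score = list(emit[0])   # best score of any path ending in each label
    back = []               # backpointers, one row per position t >= 1
    for t in range(1, T):
        new_score, ptrs = [], []
        for j in range(L):  # double loop over labels: quadratic in L
            best_i = max(range(L), key=lambda i: score[i] + trans[i][j])
            new_score.append(score[best_i] + trans[best_i][j] + emit[t][j])
            ptrs.append(best_i)
        score = new_score
        back.append(ptrs)
    # Recover the best path by following backpointers from the best final label.
    j = max(range(L), key=lambda j: score[j])
    path = [j]
    for ptrs in reversed(back):
        j = ptrs[j]
        path.append(j)
    return path[::-1]
```

With uniform transitions the decoder simply picks the best emission at each step, e.g. `viterbi([[1, 0], [0, 1]], [[0, 0], [0, 0]])` yields `[0, 1]`; the exact-decoding algorithm in this paper produces the same optimal sequence while avoiding the full scan over all L candidate labels at every position.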