Coarse-to-fine inference has been shown to be a robust approximate method for improving the efficiency of structured prediction models while preserving their accuracy. We propose a multi-pass coarse-to-fine architecture for dependency parsing using linear-time vine pruning and structured prediction cascades. Our first-, second-, and third-order models achieve accuracies comparable to those of their unpruned counterparts, while exploring only a fraction of the search space. We observe speed-ups of up to two orders of magnitude compared to exhaustive search. Our pruned third-order model is twice as fast as an unpruned first-order model and also compares favorably to a state-of-the-art transition-based parser for multiple languages.
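The pruning idea can be made concrete with a small sketch. In a vine parse, the linear-time first pass considers only "short" dependencies (arcs of length at most some bandwidth b) plus attachments to the artificial root, and later higher-order passes search only among surviving arcs. The sketch below is illustrative, not the paper's implementation; the function name, the bandwidth parameter `b`, and the indexing convention (token indices 1..n, root at 0) are assumptions for exposition.

```python
# Hedged sketch of the vine-pruning restriction described above: keep only
# arcs whose head and modifier are within a bandwidth b of each other, plus
# all attachments to the artificial root (index 0). This shrinks the arc
# set from O(n^2) to O(n*b) before any higher-order pass runs.

def vine_candidate_arcs(n, b):
    """Candidate (head, modifier) arcs a vine first pass would consider
    for a sentence of n words (token indices 1..n, 0 = root)."""
    arcs = set()
    for m in range(1, n + 1):
        arcs.add((0, m))                    # root attachment always allowed
        for h in range(1, n + 1):
            if h != m and abs(h - m) <= b:  # short "vine" dependency
                arcs.add((h, m))
    return arcs

n, b = 40, 3
pruned = vine_candidate_arcs(n, b)
full = n * n  # n root arcs plus n*(n-1) word-to-word arcs
print(len(pruned), full)
```

For n = 40 and b = 3 this keeps 268 of 1600 possible arcs, which is the kind of search-space reduction that lets the pruned higher-order models in the abstract run faster than an unpruned first-order model.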