Learning to Parse Natural Language with Maximum Entropy Models
Machine Learning - Special issue on natural language learning
Building a large annotated corpus of English: the Penn Treebank
Computational Linguistics - Special issue on using large corpora: II
A maximum-entropy-inspired parser
NAACL 2000 Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference
Automatic compensation for parser figure-of-merit flaws
ACL '99 Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics
Attention shifting for parsing speech
ACL '04 Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics
Comparing and combining finite-state and context-free parsers
HLT '05 Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing
Speeding up full syntactic parsing by leveraging partial parsing decisions
COLING-ACL '06 Proceedings of the COLING/ACL Main Conference Poster Sessions
Classifying chart cells for quadratic complexity context-free inference
COLING '08 Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1
Improving the efficiency of a wide-coverage CCG parser
IWPT '07 Proceedings of the 10th International Conference on Parsing Technologies
Cube pruning as heuristic search
EMNLP '09 Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1
Dynamic programming for linear-time incremental parsing
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Simulating morphological analyzers with stochastic taggers for confidence estimation
CLEF '09 Proceedings of the 10th Cross-Language Evaluation Forum Conference on Multilingual Information Access Evaluation: Text Retrieval Experiments
Chart pruning for fast lexicalised-grammar parsing
COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics: Posters
Beam-width prediction for efficient context-free parsing
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1
Efficient CCG parsing: A* versus adaptive supertagging
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1
Unary constraints for efficient context-free parsing
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2
A fast, accurate, non-projective, semantically-enriched parser
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Efficient matrix-encoded grammars and low latency parallelization strategies for CYK
IWPT '11 Proceedings of the 12th International Conference on Parsing Technologies
Generalized higher-order dependency parsing with cube pruning
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Finite-state chart constraints for reduced complexity context-free parsing pipelines
Computational Linguistics
In this paper, we extend methods from Roark and Hollingshead (2008) for reducing the worst-case complexity of a context-free parsing pipeline via hard constraints derived from finite-state tagging pre-processing. The methods from our previous paper achieved quadratic worst-case complexity. Here we prove that alternate methods for choosing constraints can achieve either linear or O(N log² N) worst-case complexity. We demonstrate that these worst-case bounds are achieved without reducing parsing accuracy; in some cases, accuracy actually improves. The new methods achieve observed performance comparable to the previously published quadratic-complexity method. Finally, we demonstrate further improvements by combining the complexity-bounding methods with additional high-precision constraints.
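As a rough sketch of the underlying idea (not the authors' implementation), the following Python fragment shows a CYK recognizer in which chart cells are closed by the output of a finite-state pre-processing step, in the spirit of Roark and Hollingshead (2008). The can_begin/can_end interface, the toy grammar, and the tag values are assumptions made purely for illustration.

    # Illustrative sketch: CYK recognition with chart cells closed by a
    # finite-state tagging pre-processing step. A cell spanning words[i:j]
    # with j - i > 1 is open only if the tagger says word i may begin, and
    # word j - 1 may end, a multiword constituent; closed cells are skipped.
    from collections import defaultdict

    def constrained_cyk(words, lexicon, rules, can_begin, can_end):
        n = len(words)
        chart = defaultdict(set)  # (i, j) -> set of nonterminal labels

        # Length-1 spans come from the lexicon and are always open.
        for i, w in enumerate(words):
            chart[(i, i + 1)] |= lexicon.get(w, set())

        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                if not (can_begin[i] and can_end[j - 1]):
                    continue  # hard constraint: cell is closed, no work done
                for k in range(i + 1, j):
                    for B in chart[(i, k)]:
                        for C in chart[(k, j)]:
                            chart[(i, j)] |= rules.get((B, C), set())
        return 'S' in chart[(0, n)]

    # Toy usage with an assumed CNF grammar and assumed tagger output.
    lexicon = {'the': {'DT'}, 'dog': {'NN'}, 'barks': {'VP'}}
    rules = {('DT', 'NN'): {'NP'}, ('NP', 'VP'): {'S'}}
    words = 'the dog barks'.split()
    tags_begin = [True, False, False]  # only position 0 may begin a multiword span
    tags_end = [False, True, True]     # positions 1 and 2 may end one
    print(constrained_cyk(words, lexicon, rules, tags_begin, tags_end))  # True

Closing cells this way prunes entire regions of the chart before inference begins; which positions the tagger leaves open, and how many, is what determines the worst-case bound the paper analyzes.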