This paper presents an iterative CKY parsing algorithm for probabilistic context-free grammars (PCFGs). The algorithm prunes unnecessary edges produced during parsing, which results in more efficient parsing. Because pruning uses an edge's inside Viterbi probability together with an upper bound on its outside Viterbi probability, the algorithm is guaranteed to output the exact Viterbi parse, unlike beam-search or best-first strategies. Experimental results on the Penn Treebank II corpus show that iterative CKY reduces the number of edges by more than 60% compared with the conventional CKY algorithm, with very small run-time overhead. The algorithm is general enough to incorporate a more sophisticated estimation function, which should lead to even more efficient parsing.
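The scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy grammar, symbol names, and threshold schedule are invented for the example, and for brevity the upper bound on the outside Viterbi log-probability is taken to be zero (trivially admissible, since log-probabilities are non-positive), so the pruning condition reduces to a check on the edge's inside score. The exactness argument still applies: any pruned edge can only appear in parses scoring below the threshold, so a parse found at a given threshold is the global Viterbi parse.

```python
import math

# Toy PCFG in Chomsky normal form (hypothetical grammar, for illustration).
# Binary rules: (left, right) -> [(parent, prob)]; lexical rules: word -> [(tag, prob)].
BINARY = {
    ("NP", "VP"): [("S", 1.0)],
    ("Det", "N"): [("NP", 1.0)],
    ("V", "NP"): [("VP", 1.0)],
}
LEXICAL = {
    "the": [("Det", 1.0)],
    "dog": [("N", 0.5)],
    "cat": [("N", 0.5)],
    "saw": [("V", 1.0)],
}

def cky_viterbi(words, threshold=float("-inf")):
    """CKY Viterbi parsing with log-probability threshold pruning.

    An edge is kept only if its inside Viterbi log-probability (plus the
    trivial outside bound of 0) is at least `threshold`.  With the default
    threshold of -inf this is plain CKY.  Returns the Viterbi log-probability
    of an S spanning the whole sentence, or None if no parse survives.
    """
    n = len(words)
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for sym, p in LEXICAL.get(w, []):
            lp = math.log(p)
            if lp >= threshold:
                chart[i][i + 1][sym] = lp
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            k = i + width
            cell = chart[i][k]
            for j in range(i + 1, k):
                for lsym, lp in chart[i][j].items():
                    for rsym, rp in chart[j][k].items():
                        for parent, p in BINARY.get((lsym, rsym), []):
                            score = lp + rp + math.log(p)
                            # Prune edges whose score falls below the threshold.
                            if score >= threshold and score > cell.get(parent, float("-inf")):
                                cell[parent] = score
    return chart[0][n].get("S")

def iterative_cky(words, start=-2.0, step=-2.0, floor=-50.0):
    """Parse with a tight threshold first, loosening it until a parse is found."""
    t = start
    while t >= floor:
        result = cky_viterbi(words, threshold=t)
        if result is not None:
            return result
        t += step
    return cky_viterbi(words)  # final pass with no pruning
```

With a well-chosen initial threshold most sentences are parsed on the first, heavily pruned pass, and only the occasional low-probability sentence pays for a second pass; a tighter outside estimate than the trivial zero bound would prune more edges per pass without sacrificing exactness.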