Building a large annotated corpus of English: the Penn Treebank
Computational Linguistics - Special issue on using large corpora: II
Probabilistic parsing and psychological plausibility
COLING '00 Proceedings of the 18th Conference on Computational Linguistics - Volume 1
Left-corner parsing and psychological plausibility
COLING '92 Proceedings of the 14th Conference on Computational Linguistics - Volume 1
A probabilistic Earley parser as a psycholinguistic model
NAACL '01 Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies
Accurate unlexicalized parsing
ACL '03 Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 1
Surprising parser actions and reading difficulty
HLT-Short '08 Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers
Toward a psycholinguistically-motivated model of language processing
COLING '08 Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1
Lookahead in deterministic left-corner parsing
IncrementParsing '04 Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together
Positive results for parsing with a bounded stack using a model-based right-corner transform
NAACL '09 Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics
EMNLP '09 Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing - Volume 1
Broad-coverage parsing using human-like memory constraints
Computational Linguistics
Complexity metrics in an incremental right-corner parser
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Hierarchical Hidden Markov Model (HHMM) parsers have been proposed as psycholinguistic models because of their broad coverage within human-like working memory limits (Schuler et al., 2008) and their ability to model human reading time behavior according to various complexity metrics (Wu et al., 2010). However, HHMM parsers have previously been evaluated only with very wide beams of several thousand parallel hypotheses, weakening claims about the model's efficiency and psychological relevance. This paper examines the effects of varying beam width on parsing accuracy and speed in this model, showing that parsing accuracy degrades gracefully as beam width decreases dramatically (to 2% of the width used to achieve previous top results), without sacrificing gains over a baseline CKY parser.
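The pruning step the abstract refers to can be illustrated with a minimal sketch. This is not the HHMM parser itself: `beam_prune` and the `(log_prob, state)` pairs are hypothetical stand-ins for the per-word step of any beam-search parser, shown only to make the "2% of the original beam width" comparison concrete.

```python
import heapq

def beam_prune(hypotheses, beam_width):
    """Keep only the `beam_width` highest-scoring hypotheses.

    `hypotheses` is a list of (log_prob, state) pairs; in a beam-search
    parser this pruning is applied after each word is consumed, so the
    number of live analyses never exceeds `beam_width`.
    """
    return heapq.nlargest(beam_width, hypotheses, key=lambda h: h[0])

# Example: shrinking a 2000-hypothesis beam to 2% of its original width,
# mirroring the reduction reported in the abstract.
full_beam = [(-float(i), f"state{i}") for i in range(2000)]
narrow_beam = beam_prune(full_beam, int(2000 * 0.02))
print(len(narrow_beam))  # 40 hypotheses survive pruning
```

Using `heapq.nlargest` rather than a full sort keeps the pruning cost at O(n log k) per word, which matters when the step is repeated at every input position.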