Foundations of statistical natural language processing
Building a large annotated corpus of English: the Penn Treebank
Computational Linguistics - Special issue on using large corpora: II
PCFG models of linguistic tree representations
Computational Linguistics
EACL '99 Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics
A new statistical parser based on bigram lexical dependencies
ACL '96 Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics
Efficient probabilistic top-down and left-corner parsing
ACL '99 Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics
AAAI'96 Proceedings of the thirteenth national conference on Artificial intelligence - Volume 2
Surprising parser actions and reading difficulty
HLT-Short '08 Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers
Lookahead in deterministic left-corner parsing
IncrementParsing '04 Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together
Robust models of human parsing
ROMAND '04 Proceedings of the 3rd Workshop on RObust Methods in Analysis of Natural Language Data
The influence of discourse on syntax: a psycholinguistic model of sentence processing
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Complexity metrics in an incremental right-corner parser
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Cognitively plausible models of human language processing
ACLShort '10 Proceedings of the ACL 2010 Conference Short Papers
HHMM parsing with limited parallelism
CMCL '10 Proceedings of the 2010 Workshop on Cognitive Modeling and Computational Linguistics
Given the recent evidence for probabilistic mechanisms in models of human ambiguity resolution, this paper investigates the plausibility of exploiting current wide-coverage, probabilistic parsing techniques to model human linguistic performance. In particular, we investigate the performance of standard stochastic parsers when they are revised to operate incrementally, and with reduced memory resources. We present techniques for ranking and filtering analyses, together with experimental results. Our results confirm that stochastic parsers which adhere to these psychologically motivated constraints achieve good performance. Memory can be reduced to 1% of that required by exhaustive search without reducing recall and precision. Additionally, these models exhibit substantially faster performance. Finally, we argue that this general result is likely to hold for more sophisticated, and psycholinguistically plausible, probabilistic parsing models.
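The ranking-and-filtering idea described in the abstract can be illustrated with a minimal beam-filtering sketch: at each input word, every partial analysis is extended, scored by probability, and only the top-k analyses (the "beam") are kept, bounding memory independently of sentence length. This is a generic illustration, not the paper's actual parser; the lexical rules and probabilities below are hypothetical.

```python
import math

def extend(hypothesis, word, rules):
    """Extend one partial analysis with the next word.

    Returns (new_hypothesis, log_prob_increment) pairs, one per
    lexical alternative for the word (hypothetical rule format).
    """
    return [(hypothesis + (tag,), math.log(p))
            for tag, p in rules.get(word, [])]

def beam_parse(words, rules, beam_width=2):
    """Incremental parse keeping only the beam_width best analyses."""
    # Each beam entry: (partial analysis, cumulative log-probability).
    beam = [((), 0.0)]
    for word in words:
        candidates = [(new_hyp, lp + dlp)
                      for hyp, lp in beam
                      for new_hyp, dlp in extend(hyp, word, rules)]
        # Rank by probability, then filter down to the beam width:
        # this is where memory use is bounded relative to
        # exhaustive search, which would keep every candidate.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = candidates[:beam_width]
    return beam

# Hypothetical lexical rules: word -> [(tag, probability), ...]
RULES = {
    "the": [("Det", 1.0)],
    "old": [("Adj", 0.7), ("Noun", 0.3)],
    "man": [("Noun", 0.8), ("Verb", 0.2)],
}

best = beam_parse(["the", "old", "man"], RULES, beam_width=2)
```

With a beam width of 2, the locally ambiguous "old" and "man" each spawn competing analyses, but only the two most probable survive each step; the highest-scoring analysis tags the sequence Det-Adj-Noun.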