Hierarchical HMM (HHMM) parsers make promising cognitive models: they use a bounded model of working memory and pursue incremental hypotheses in parallel, yet still achieve parsing accuracies competitive with chart-based techniques. This paper aims to validate that a right-corner HHMM parser can also produce complexity metrics, which quantify a reader's incremental difficulty in understanding a sentence. Besides defining standard metrics in the HHMM framework, a new metric, embedding difference, is proposed to test the hypothesis that HHMM store elements represent syntactic working memory. Results show that HHMM surprisal outperforms all other evaluated metrics in predicting reading times, and that embedding difference makes a significant, independent contribution.
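The two metrics named in the abstract can be illustrated with a minimal sketch (function names are hypothetical, not from the paper): surprisal is computed in the standard way as the drop in log prefix probability between consecutive words, and embedding difference as the change in expected embedding depth of the parser's store between words.

```python
import math

def surprisal(prefix_probs):
    """Surprisal of word t: -log P(w_t | w_1..t-1),
    i.e. log prefix_prob(t-1) - log prefix_prob(t).
    prefix_probs[0] is the probability of the empty prefix (1.0)."""
    return [math.log(prefix_probs[t - 1]) - math.log(prefix_probs[t])
            for t in range(1, len(prefix_probs))]

def embedding_difference(depth_dists):
    """Change in expected embedding depth between consecutive words.
    Each element of depth_dists maps a store depth (number of occupied
    HHMM store elements) to its posterior probability at that word."""
    def expected_depth(dist):
        return sum(d * p for d, p in dist.items())
    depths = [expected_depth(dist) for dist in depth_dists]
    return [depths[t] - depths[t - 1] for t in range(1, len(depths))]
```

For example, if each word halves the prefix probability, every word's surprisal is log 2; if the posterior mass shifts from depth 1 toward depth 2, embedding difference is positive, reflecting a hypothesized increase in syntactic working-memory load.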