Lexical surprisal as a general predictor of reading time

  • Authors:
  • Irene Fernandez Monsalve, Stefan L. Frank, Gabriella Vigliocco

  • Affiliations:
  • University College London (all authors)

  • Venue:
  • EACL '12 Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics
  • Year:
  • 2012


Abstract

Probabilistic accounts of language processing can be psychologically tested by comparing word-reading times (RT) to the conditional word probabilities estimated by language models. Using surprisal as a linking function, a significant correlation between unlexicalized surprisal and RT has been reported (e.g., Demberg and Keller, 2008), but success using lexicalized models has been limited. In this study, phrase structure grammars and recurrent neural networks estimated both lexicalized and unlexicalized surprisal for words of independent sentences from narrative sources. These same sentences were used as stimuli in a self-paced reading experiment to obtain RTs. The results show that lexicalized surprisal according to both models is a significant predictor of RT, outperforming its unlexicalized counterparts.
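
For context, surprisal is the standard information-theoretic quantity used as the linking function in the abstract: the negative log probability of a word given the words that precede it. The notation below is a generic statement of that definition, not a formula quoted from the paper:

\[
\mathrm{surprisal}(w_t) = -\log P\left(w_t \mid w_1, \dots, w_{t-1}\right)
\]

Less predictable words thus carry higher surprisal. In this literature, lexicalized models typically estimate P over actual word forms, whereas unlexicalized models typically condition on part-of-speech categories instead, so the two variants differ only in what the probability is estimated over.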