Synthesizing image representations of linguistic and topological features for predicting areas of attention

  • Authors:
  • Pascual Martínez-Gómez; Tadayoshi Hara; Chen Chen; Kyohei Tomita; Yoshinobu Kano; Akiko Aizawa

  • Affiliations:
  • National Institute of Informatics, Japan, The University of Tokyo, Japan; National Institute of Informatics, Japan; National Institute of Informatics, Japan, The University of Tokyo, Japan; National Institute of Informatics, Japan, The University of Tokyo, Japan; National Institute of Informatics, Japan, PRESTO, Japan Science and Technology Agency, Japan; National Institute of Informatics, Japan

  • Venue:
  • PRICAI'12 Proceedings of the 12th Pacific Rim International Conference on Trends in Artificial Intelligence
  • Year:
  • 2012

Abstract

Depending on the reading objective or task, text portions with certain linguistic features require more user attention to maximize the level of understanding. Our goal is to build a predictor of these text areas. Our strategy consists of synthesizing image representations of linguistic features, which allows us to use natural language processing techniques while preserving the topology of the text. Eye-tracking technology lets us precisely observe which words on a screen are fixated and for how long. We then estimate the scaling factors of a linear combination of image representations of linguistic features that best explains the observed gaze evidence, which yields a quantification of the influence of linguistic features on reading behavior. Finally, we compute saliency maps that predict the most interesting or cognitively demanding areas of the text. We achieve high prediction accuracy on the text areas that require more attention for users to maximize their understanding in certain reading tasks, suggesting that linguistic features are good signals for prediction.
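
The abstract describes a linear model whose scaling factors are fit to gaze evidence and then reused to produce a saliency map. Below is a minimal sketch of that idea, assuming per-pixel feature maps and a fixation-duration map as inputs; the array shapes, the random placeholder data, and the use of ordinary least squares are illustrative assumptions, not the authors' exact estimation procedure.

```python
import numpy as np

# Assumed dimensions: each feature map is an image-like array over the
# rendered text page (height x width), one map per linguistic feature.
H, W, n_features = 64, 128, 5

# Synthesized image representations of linguistic features
# (placeholder random data; in the paper these would encode features
# such as word-level linguistic properties drawn onto the page layout).
feature_maps = np.random.rand(n_features, H, W)

# Gaze evidence: per-pixel fixation duration accumulated from eye tracking
# (placeholder random data for illustration).
gaze_map = np.random.rand(H, W)

# Estimate the scaling factors of the linear combination of feature maps
# that best explains the gaze evidence (here via ordinary least squares).
X = feature_maps.reshape(n_features, -1).T   # (H*W, n_features)
y = gaze_map.ravel()                         # (H*W,)
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predicted saliency map: weighted sum of the linguistic feature maps.
saliency = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
print(weights, saliency.shape)
```

Least squares is only one way to estimate the scaling factors; the point of the sketch is that the fitted weights quantify each linguistic feature's influence on reading behavior and transfer directly to unseen text as a predicted saliency map.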