Multi-mode saliency dynamics model for analyzing gaze and attention

  • Authors: Ryo Yonetani; Hiroaki Kawashima; Takashi Matsuyama
  • Affiliations: Kyoto University (all authors)

  • Venue: Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA)
  • Year: 2012

Abstract

We present a method for analyzing the relationship between eye movements and saliency dynamics in videos, in order to estimate the attentive states of users while they watch those videos. The multi-mode saliency-dynamics model (MMSDM) is introduced to segment spatio-temporal patterns of saliency dynamics into multiple sequences of primitive modes underlying the saliency patterns. The MMSDM allows us to describe this relationship via the local saliency dynamics around gaze points, modeled as a set of distances between gaze points and the salient regions characterized by the extracted modes. Experimental results demonstrate the effectiveness of the proposed model in classifying users' attentive states by learning the statistical differences in the local saliency dynamics along gaze paths at each level of attentiveness.
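To make the distance-based descriptor in the abstract concrete, the following minimal sketch computes, for each video frame, the distances between the gaze point and the centers of the salient regions (one center per saliency mode). The function name, array layout, and the use of region centers rather than full regions are our own simplifying assumptions, not details from the paper:

```python
import numpy as np

def local_saliency_features(gaze_points, region_centers):
    """Distance-based descriptor of local saliency dynamics (hypothetical sketch).

    gaze_points    : (T, 2) array of gaze coordinates, one per frame.
    region_centers : (T, K, 2) array of salient-region centers,
                     K modes per frame.
    Returns an (T, K) array of Euclidean gaze-to-region distances,
    which could serve as per-frame features for classifying attentiveness.
    """
    gaze = np.asarray(gaze_points, dtype=float)        # (T, 2)
    centers = np.asarray(region_centers, dtype=float)  # (T, K, 2)
    # Broadcast the gaze point against all K region centers per frame.
    return np.linalg.norm(centers - gaze[:, None, :], axis=-1)
```

A classifier (e.g. a logistic-regression or HMM-based model over these per-frame distance vectors) could then learn the statistical differences between attentiveness levels, in the spirit of the experiments described above.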