Image registration for text-gaze alignment

  • Authors:
  • Pascual Martinez-Gomez;Chen Chen;Tadayoshi Hara;Yoshinobu Kano;Akiko Aizawa

  • Affiliations:
  • The University of Tokyo, Tokyo, Japan;The University of Tokyo, Tokyo, Japan;National Institute of Informatics, Tokyo, Japan;PRESTO, Japan Science and Technology Agency, Tokyo, Japan;National Institute of Informatics, Tokyo, Japan

  • Venue:
  • Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces
  • Year:
  • 2012

Abstract

Applications using eye-tracking devices require higher recognition accuracy as tasks grow more complex. More sophisticated methods for correcting eye-tracking measurement errors are therefore needed to lower the barrier to adopting eye-trackers in unconstrained tasks. We propose to take advantage of the content or the structure of the textual information displayed on the screen to build informed error-correction algorithms that generalize well. The idea is to use feature-based image registration techniques to perform a linear transformation of gaze coordinates that aligns them with the text displayed on the screen. To estimate the parameters of the linear transformation, three optimization strategies are proposed to avoid the problem of local minima, namely Monte Carlo, multi-resolution, and multi-blur optimization. Experimental results show that these methods achieve a more precise alignment of gaze data with words on the screen, allowing a more reliable use of eye-trackers in complex and unconstrained tasks.
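
To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of aligning gaze coordinates to on-screen words with a linear transform. It assumes fixations and word bounding-box centers are given in pixel coordinates, and it uses a simple Monte Carlo search over per-axis scale and translation, keeping the candidate transform whose fixations land closest to word centers; all parameter ranges, function names, and the toy data are illustrative assumptions.

```python
# Hypothetical sketch: Monte Carlo search for a linear (scale + translation)
# transform that aligns gaze fixations with word centers on the screen.
# Parameter ranges and the error measure are assumptions, not the paper's.
import numpy as np

def alignment_error(fixations, word_centers):
    # Mean distance from each fixation to its nearest word center.
    d = np.linalg.norm(fixations[:, None, :] - word_centers[None, :, :], axis=2)
    return d.min(axis=1).mean()

def monte_carlo_align(fixations, word_centers, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    best_params, best_err = (1.0, 1.0, 0.0, 0.0), np.inf
    for _ in range(n_samples):
        # Sample scales near 1 and translations of a few dozen pixels (assumed ranges).
        sx, sy = rng.normal(1.0, 0.05, size=2)
        tx, ty = rng.normal(0.0, 30.0, size=2)
        transformed = fixations * np.array([sx, sy]) + np.array([tx, ty])
        err = alignment_error(transformed, word_centers)
        if err < best_err:
            best_err, best_params = err, (sx, sy, tx, ty)
    return best_params, best_err

if __name__ == "__main__":
    # Toy data: word centers on a grid; fixations are a scaled, shifted subset,
    # mimicking systematic eye-tracker measurement error.
    word_centers = np.array([[x, y] for y in range(100, 400, 40)
                                    for x in range(80, 800, 60)], dtype=float)
    fixations = word_centers[::3] * 1.03 + np.array([25.0, -15.0])
    params, err = monte_carlo_align(fixations, word_centers)
    print("estimated (sx, sy, tx, ty):", np.round(params, 3), " error:", round(err, 2))
```

The multi-resolution and multi-blur strategies described in the abstract would replace the purely random sampling above with coarse-to-fine searches over progressively sharper representations of the text layout; they are not shown here.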