Compensating for eye tracker camera movement

  • Authors:
  • Susan M. Kolakowski; Jeff B. Pelz

  • Affiliations:
  • Rochester Institute of Technology, Rochester, NY; Rochester Institute of Technology, Rochester, NY

  • Venue:
  • Proceedings of the 2006 symposium on Eye tracking research & applications
  • Year:
  • 2006


Abstract

An algorithm was developed to improve the prediction of eye position from video-based eye tracker data. Eye trackers that determine eye position from images of the pupil and corneal reflection typically differentiate poorly between changes in eye position and movements of the camera relative to the subject's head. The common method employed by video-based eye trackers is to compute the vector difference between the center of the pupil and the center of the corneal reflection, under the assumption that the two centers shift in unison when the camera moves with respect to the head. This assumption was tested and is shown to increase prediction error. In addition, locating the center of the corneal reflection is inherently less precise than locating the center of the pupil because of the reflection's small size, so typical approaches yield eye positions that can be no more robust than the relatively noisy corneal reflection data. An algorithm was developed that accounts more effectively for camera movement with respect to the head and reduces the noise in the final eye position prediction. The algorithm was tested and is shown to be particularly robust in the common situation in which sharp eye movements are intermixed with smooth head-to-camera changes.
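The pupil-minus-corneal-reflection (P-CR) vector difference described in the abstract can be sketched as follows. All names, simulated values, and the moving-average smoothing step are illustrative assumptions, not the authors' implementation; the smoothing merely shows why filtering the noisy corneal-reflection track is plausible when camera motion is smooth and eye movements are abrupt.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # number of simulated video frames

# Simulated pupil and corneal reflection (CR) centers in image
# coordinates (pixels): a slow camera drift moves both features
# together, an abrupt eye movement at frame 100 moves mainly the
# pupil, and the CR is noisier because the reflection is small.
drift = np.cumsum(rng.normal(0.0, 0.02, size=(n, 2)), axis=0)
eye = np.zeros((n, 2))
eye[100:] += np.array([8.0, 0.0])  # sharp eye movement (saccade)
pupil = np.array([320.0, 240.0]) + drift + eye + rng.normal(0, 0.05, (n, 2))
cr = np.array([310.0, 235.0]) + drift + rng.normal(0, 0.4, (n, 2))

# Standard P-CR signal: camera drift cancels because it shifts both
# centers in unison, but the noisy CR dominates the noise floor.
pcr = pupil - cr

# One plausible noise-reduction step (an assumption, not the paper's
# exact algorithm): low-pass filter the CR track, since camera
# movement relative to the head is smooth while eye movements are not.
def moving_average(x, k):
    kernel = np.ones(k) / k
    return np.column_stack([np.convolve(x[:, j], kernel, mode="same")
                            for j in range(x.shape[1])])

pcr_smoothed = pupil - moving_average(cr, 15)

# Compare noise during a fixation (interior frames, away from the
# saccade and from the filter's edge effects).
print("raw P-CR noise (px)     :", pcr[:100].std(axis=0))
print("smoothed P-CR noise (px):", pcr_smoothed[20:80].std(axis=0))
```

The saccade at frame 100 survives in both signals because only the CR track is smoothed; the pupil track, which carries the sharp eye movement, is left untouched.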