Optical eye models for gaze tracking

  • Authors:
  • Jeffrey B. Mulligan

  • Affiliations:
  • NASA Ames Research Center

  • Venue:
  • Proceedings of the 2006 symposium on Eye tracking research & applications
  • Year:
  • 2006

Abstract

The traditional "bottom-up" approach to video gaze tracking consists of measuring image features, such as the position of the pupil, corneal reflex, limbus, etc. These measurements are mapped to gaze angles using coefficients obtained from calibration data, collected as a cooperative subject voluntarily fixates a series of known targets. This may be contrasted with a "top-down" approach in which the pose parameters of a model of the eye are adjusted in conjunction with a camera model to obtain a match to image data. One advantage of the model-based approach is its robustness to changes in geometry, in particular the disambiguation of translation and rotation. A second advantage is that the pose estimates obtained are in absolute angular units (e.g., degrees); traditional calibration serves only to determine the relation between the visual and optical axes, and to provide a check for the model. While traditional grid calibration methods may not need to be applied, a set of views of the eye in a variety of poses is needed to determine the model parameters for an individual. When relative motion between the head and the camera is eliminated (as with a head-mounted camera), the model parameters can be determined from as few as two images. A single point calibration is required to determine the angular offset between the line-of-sight and the observed optical axis.
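To make the contrast concrete, the "bottom-up" calibration step described above can be sketched as a least-squares fit of coefficients mapping a 2-D image feature (e.g., the pupil-minus-corneal-reflex vector) to gaze angles. This is an illustrative sketch only: the affine model, function names, and simulated data are assumptions, not taken from the paper, which instead argues for fitting an explicit eye/camera model.

```python
import numpy as np

def fit_calibration(features, gaze_angles):
    """Least-squares fit of an affine map from 2-D image features
    (e.g., pupil - corneal reflex) to 2-D gaze angles in degrees.
    A hypothetical stand-in for the coefficient fit described above."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias term
    coeffs, *_ = np.linalg.lstsq(X, gaze_angles, rcond=None)
    return coeffs  # shape (3, 2): 2x2 linear part plus offset row

def apply_calibration(coeffs, features):
    """Map image features to estimated gaze angles with fitted coefficients."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ coeffs

# Simulated calibration session: the subject fixates a 3x3 grid of known
# targets; the "true" affine relation below is invented for illustration.
true_A = np.array([[20.0, 0.5], [-0.3, 18.0]])  # degrees per feature unit
true_b = np.array([1.0, -2.0])                  # angular offset in degrees
feats = np.array([[x, y] for x in (-1, 0, 1) for y in (-1, 0, 1)], float)
angles = feats @ true_A + true_b

coeffs = fit_calibration(feats, angles)
pred = apply_calibration(coeffs, feats)
print(np.allclose(pred, angles))  # noiseless affine data is recovered exactly
```

Note what this sketch cannot do: the fitted coefficients are valid only for the geometry in which they were collected, which is exactly the fragility the model-based "top-down" approach is meant to avoid.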