Ray tracing to get 3D fixations on VOIs from portable eye tracker videos

  • Authors:
  • Susan M. Munn; Jeff B. Pelz

  • Affiliations:
  • Rochester Institute of Technology, Rochester, NY; Rochester Institute of Technology, Rochester, NY

  • Venue:
  • SIGGRAPH '09: Posters
  • Year:
  • 2009

Abstract

Our portable video-based monocular eye tracker consists of a headgear with two cameras that capture video of the observer's right eye and of the scene from the observer's perspective (Figure 1a). With this eye tracker, we typically obtain a position, representing the observer's point of regard (POR), in each frame of the scene video (Figure 1b, without the bottom-left box). These POR positions are in the image coordinate system of the scene camera, which moves with the observer's head; they therefore do not tell us where the observer is looking in an exocentric reference frame. Currently, the videos are analyzed manually by examining each frame. In short, we aim to automatically determine how long the observer spends fixating specific objects in the scene and in what order these objects are fixated.
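The geometric step implied here is mapping a per-frame POR from scene-camera image coordinates into a world-fixed frame: back-project the 2D POR through the scene camera's pose to obtain a 3D ray, then intersect that ray with a surface of a volume of interest (VOI). A minimal sketch of that idea, assuming a pinhole camera model and a planar VOI face; the names `K`, `R`, `t` (intrinsics and world-to-camera pose) and the helper functions are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def por_ray_world(por_px, K, R, t):
    """Back-project a 2D POR (pixel coords) into a 3D ray in world coords.

    K: 3x3 camera intrinsics; R, t: world-to-camera pose, so the
    camera center in world coordinates is C = -R.T @ t.
    Returns (origin, unit direction) of the gaze ray.
    """
    uv1 = np.array([por_px[0], por_px[1], 1.0])
    d_cam = np.linalg.inv(K) @ uv1        # ray direction in camera frame
    d_world = R.T @ d_cam                 # rotate into the world frame
    origin = -R.T @ t                     # camera center in world frame
    return origin, d_world / np.linalg.norm(d_world)

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Intersect the gaze ray with a planar VOI face.

    Returns the 3D hit point, or None if the ray is parallel to the
    plane or the intersection lies behind the camera.
    """
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:
        return None
    s = ((plane_point - origin) @ plane_normal) / denom
    return origin + s * direction if s > 0 else None
```

With the hit points expressed in world coordinates, consecutive frames whose PORs land on the same VOI can be grouped into a fixation, giving per-object dwell time and fixation order.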