SemantiCode: using content similarity and database-driven matching to code wearable eyetracker gaze data

  • Authors: Daniel F. Pontillo, Thomas B. Kinsman, Jeff B. Pelz
  • Affiliation: Rochester Institute of Technology (all three authors)
  • Venue: Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications
  • Year: 2010

Abstract

Laboratory eyetrackers, constrained to a fixed display and a static (or accurately tracked) observer, facilitate automated analysis of fixation data. The development of wearable eyetrackers has extended the range of environments and tasks that can be studied, but at the expense of automated analysis. Wearable eyetrackers provide a 2D point-of-regard (POR) in scene-camera coordinates, but the researcher is typically interested in some high-level semantic property (e.g., object identity, region, or material) of the scene surrounding individual fixation points. The synthesis of POR data into fixations and semantic information remains a labor-intensive manual task, limiting the application of wearable eyetracking. We describe a system that segments POR videos into fixations and allows users to train a database-driven object-recognition system. A correctly trained library yields a highly accurate, semi-automated translation of raw POR data into a sequence of objects, regions, or materials.
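
The abstract outlines a two-stage pipeline: segment raw POR samples into fixations, then label each fixation by matching the surrounding scene-camera image patch against a user-trained library of labeled exemplars. Below is a minimal sketch of such a pipeline. The dispersion-threshold (I-DT) fixation detector, the color-histogram similarity measure, and all parameter values are illustrative assumptions; the abstract does not specify which algorithms SemantiCode actually uses.

import numpy as np


def idt_fixations(x, y, t, max_dispersion=0.05, min_duration=0.10):
    """Dispersion-threshold (I-DT) fixation detection.

    x, y : 1-D numpy arrays of POR coordinates, assumed here to be
    normalized scene-camera coordinates in [0, 1]; t : timestamps in
    seconds. Returns (start_idx, end_idx, cx, cy) tuples.
    """
    def dispersion_ok(a, b):
        # Dispersion = horizontal extent + vertical extent of the window.
        return ((x[a:b + 1].max() - x[a:b + 1].min()) +
                (y[a:b + 1].max() - y[a:b + 1].min())) <= max_dispersion

    fixations, i, n = [], 0, len(t)
    while i < n:
        # Grow an initial window spanning at least min_duration.
        j = i
        while j < n - 1 and t[j + 1] - t[i] < min_duration:
            j += 1
        if t[j] - t[i] >= min_duration and dispersion_ok(i, j):
            # Extend the fixation while the points stay tightly clustered.
            while j < n - 1 and dispersion_ok(i, j + 1):
                j += 1
            fixations.append((i, j, x[i:j + 1].mean(), y[i:j + 1].mean()))
            i = j + 1
        else:
            i += 1  # No fixation starts here; slide the window forward.
    return fixations


def patch_histogram(frame, cx, cy, half=32, bins=8):
    """RGB color histogram of the scene-frame patch around a fixation."""
    h, w, _ = frame.shape
    px, py = int(cx * w), int(cy * h)
    patch = frame[max(py - half, 0):py + half, max(px - half, 0):px + half]
    hist, _ = np.histogramdd(patch.reshape(-1, 3).astype(float),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)


def label_fixation(hist, library):
    """Nearest-neighbor lookup against the user-trained exemplar library.

    library : list of (label, exemplar_histogram) pairs accumulated as
    the user codes fixations during a training phase.
    """
    if not library:
        return None
    return min(library, key=lambda entry: np.abs(hist - entry[1]).sum())[0]

In this sketch, each fixation the user codes by hand adds a (label, histogram) pair to the library, so labeling becomes progressively more automated as the library grows. This mirrors the semi-automated, train-then-match workflow the abstract describes, though the features and matching method used by the actual system are not given here.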