Automatic analysis of 3D gaze coordinates on scene objects using data from eye-tracking and motion-capture systems

  • Authors:
  • Kai Essig; Daniel Dornbusch; Daniel Prinzhorn; Helge Ritter; Jonathan Maycock; Thomas Schack

  • Affiliations:
  • Bielefeld University, PB, Bielefeld, Germany (all authors)

  • Venue:
  • Proceedings of the Symposium on Eye Tracking Research and Applications

  • Year:
  • 2012

Abstract

We implemented a system, called the VICON-EyeTracking Visualizer, that combines mobile eye-tracking data with motion-capture data to calculate and visualize the 3D gaze vector within the motion-capture coordinate system. To ensure that both devices were temporally synchronized, we used software we had previously developed. By placing reflective markers on objects in the scene, their positions become known, and spatially synchronizing the eye tracker with the motion-capture system then allows us to automatically compute how often and where fixations occur, thus avoiding the time-consuming and error-prone traditional manual annotation process. We evaluated our approach by comparing its output for a simple looking task and a more complex grasping task against the average results produced by manual annotation. Preliminary data reveal that, in the looking task, the program differed from the average manual annotation results by only approximately 3 percent with regard to the number of fixations and the cumulative fixation duration on each point in the scene. In the case of the more complex grasping task, the results depend on object size: for larger objects there was good agreement (differences below 16 percent, or 950 ms), but this degraded for smaller objects, where more saccades occur towards object boundaries. The advantages of our approach are easy user calibration, unrestricted body movement (due to the mobile eye-tracking system), and compatibility with any wearable eye tracker and marker-based motion-tracking system. Extending existing approaches, our system is also able to monitor fixations on moving objects. The automatic analysis of gaze and movement data in complex 3D scenes can be applied to a variety of research domains, e.g., Human-Computer Interaction, Virtual Reality, or grasping and gesture research.
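The core computation described above, mapping each gaze sample onto marker-tagged scene objects within the motion-capture coordinate frame, can be illustrated with a small sketch. The snippet below is not the VICON-EyeTracking Visualizer itself: it assumes the gaze origin and direction have already been transformed into the motion-capture coordinate system, approximates each marker-tagged object by a sphere, and accumulates per-object dwell time by ray-sphere intersection. Object names, positions, radii, and the helper functions (first_object_hit, dwell_times) are illustrative, and fixation classification (e.g., dispersion or velocity thresholds) is omitted.

```python
import numpy as np

def first_object_hit(origin, direction, objects):
    """Return the name of the nearest object intersected by the gaze ray."""
    direction = direction / np.linalg.norm(direction)
    best_name, best_t = None, np.inf
    for name, (center, radius) in objects.items():
        oc = origin - center
        b = np.dot(direction, oc)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - c                 # ray-sphere discriminant
        if disc < 0:
            continue                     # gaze ray misses this sphere
        t = -b - np.sqrt(disc)           # distance to nearest intersection
        if 0 < t < best_t:
            best_name, best_t = name, t
    return best_name

def dwell_times(samples, objects, dt_ms):
    """Accumulate per-object dwell time from synchronized gaze samples."""
    totals = {}
    for origin, direction in samples:
        hit = first_object_hit(np.asarray(origin, float),
                               np.asarray(direction, float), objects)
        if hit is not None:
            totals[hit] = totals.get(hit, 0.0) + dt_ms
    return totals

# Illustrative scene: two marker-tagged objects (center, radius) in metres.
objects = {"cup": (np.array([0.30, 0.00, 1.20]), 0.05),
           "box": (np.array([-0.20, 0.10, 1.00]), 0.10)}
# Two 50 ms gaze samples: (eye position, gaze direction) in mocap coordinates.
samples = [((0.0, 1.5, 0.0), (0.3, -1.5, 1.2)),
           ((0.0, 1.5, 0.0), (-0.2, -1.4, 1.0))]
print(dwell_times(samples, objects, dt_ms=50.0))
```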