Eye Tracking for Avatar Eye Gaze Control During Object-Focused Multiparty Interaction in Immersive Collaborative Virtual Environments

  • Authors:
  • William Steptoe, Department of Computer Science, University College London (W.Steptoe@cs.ucl.ac.uk)
  • Oyewole Oyekoya, Department of Computer Science, University College London
  • Alessio Murgia, Department of Computer Science, University of Reading
  • Robin Wolff, Centre for Virtual Environments, University of Salford
  • John Rae, Department of Psychology, Roehampton University
  • Estefania Guimaraes, Department of Psychology, Roehampton University
  • David Roberts, Centre for Virtual Environments, University of Salford
  • Anthony Steed, Department of Computer Science, University College London

  • Venue:
  • VR '09: Proceedings of the 2009 IEEE Virtual Reality Conference
  • Year:
  • 2009

Abstract

In face-to-face collaboration, eye gaze is used both as a bidirectional signal to monitor and indicate focus of attention and action, and as a resource to manage the interaction. In remote interaction supported by Immersive Collaborative Virtual Environments (ICVEs), embodied avatars, each representing and controlled by a participant, share a virtual space. We report on a study designed to evaluate methods of avatar eye gaze control during an object-focused puzzle scenario performed between three networked CAVE™-like systems. We compare tracked gaze, in which avatars' eyes are controlled by head-mounted mobile eye trackers worn by participants, to a gaze model that uses head orientation to inform saccade generation, and to static gaze, in which the eyes do not move. We analyse task performance, subjective user experience, and interactional behaviour. While tracked gaze did not provide a statistically significant benefit over static gaze, it was the highest-performing condition. The gaze model, however, resulted in significantly lower task performance and an increased error rate.
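
To make the abstract's second condition concrete, the sketch below illustrates one plausible form of a gaze model that derives saccades from head orientation. It is a minimal, assumption-laden illustration rather than the model evaluated in the paper: it samples fixation targets within a fixed cone around the head's forward axis and draws inter-saccade intervals from an exponential distribution. The class name, parameter names, and values (GAZE_CONE_DEG, SACCADE_INTERVAL_MEAN_S) are all hypothetical.

    import random

    # Hypothetical head-informed saccade model (illustrative only; names
    # and values are assumptions, not taken from the paper).
    SACCADE_INTERVAL_MEAN_S = 0.5   # assumed mean fixation duration, seconds
    GAZE_CONE_DEG = 15.0            # assumed saccade cone around the head axis

    class HeadInformedGazeModel:
        """Drives avatar eye direction from tracked head orientation by
        sampling saccade targets within a cone about the head's forward axis."""

        def __init__(self):
            self.next_saccade_t = 0.0
            self.eye_yaw_offset = 0.0    # gaze offset from head yaw, degrees
            self.eye_pitch_offset = 0.0  # gaze offset from head pitch, degrees

        def update(self, t, head_yaw, head_pitch):
            """Return the avatar's absolute gaze (yaw, pitch) in degrees at time t."""
            if t >= self.next_saccade_t:
                # Saccade: pick a new fixation offset within the assumed cone.
                self.eye_yaw_offset = random.uniform(-GAZE_CONE_DEG, GAZE_CONE_DEG)
                self.eye_pitch_offset = random.uniform(-GAZE_CONE_DEG, GAZE_CONE_DEG)
                # Exponential intervals are a crude stand-in for fixation durations.
                self.next_saccade_t = t + random.expovariate(1.0 / SACCADE_INTERVAL_MEAN_S)
            return head_yaw + self.eye_yaw_offset, head_pitch + self.eye_pitch_offset

    # Example: drive the eyes at 60 Hz while the head holds a fixed orientation.
    model = HeadInformedGazeModel()
    for frame in range(180):
        gaze = model.update(frame / 60.0, head_yaw=10.0, head_pitch=-5.0)

In these terms, the paper's tracked-gaze condition would replace the sampled offsets with measurements from the head-mounted eye tracker, while the static-gaze condition corresponds to holding both offsets at zero.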