Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together

  • Authors and Affiliations:
  • David Roberts (University of Salford, d.j.roberts@salford.ac.uk); Robin Wolff (University of Salford); John Rae (University of Roehampton); Anthony Steed (University College London); Rob Aspin (University of Salford); Moira McIntyre (University of Salford); Adriana Pena (University of Salford); Oyewole Oyekoya (University College London); Will Steptoe (University College London)

  • Venue:
  • VR '09 Proceedings of the 2009 IEEE Virtual Reality Conference
  • Year:
  • 2009


Abstract

Eye gaze is an important and widely studied non-verbal resource in co-located social interaction. Two main technologies are available today for supporting tele-presence between people: video conferencing (VC) and collaborative virtual environments (CVEs). In VC, eye-gaze behaviour can be observed, but in practice the targets of gaze are judged correctly only if the participants remain relatively still. We attempt to support eye-gaze behaviour in an unconstrained manner by integrating eye-trackers into an Immersive CVE (ICVE) system. This paper aims to show that while both ICVE and VC allow people to discern when someone gazing into their space from another location is looking at them, and what else is being looked at, only ICVE continues to support this as people move. The conditions of aligned VC, ICVE, eye-gaze enabled ICVE, and co-location are compared. The impact of alignment, lighting, resolution, and perspective distortion is minimised through a set of pilot experiments before a formal experiment records results under optimal settings. Results show that both VC and ICVE support eye-gaze in constrained situations, but only ICVE supports movement of the observer. We quantify the misjudgements that are made and discuss how our findings might inform research into supporting eye-gaze through interpolated free-viewpoint video-based methods.