Gaze communication using semantically consistent spaces

  • Authors:
  • Michael J. Taylor; Simon M. Rowe

  • Affiliations:
  • Canon Research Centre Europe, Guildford GU2 5YJ, UK (both authors)

  • Venue:
  • Proceedings of the SIGCHI conference on Human Factors in Computing Systems
  • Year:
  • 2000

Abstract

This paper presents a design for a user interface that supports improved gaze communication in multi-point video conferencing. We set out to use traditional computer displays to mediate the gaze of remote participants in a realistic manner. Previous approaches typically assume immersive displays, and use live video to animate avatars in a shared 3D virtual world. This shared world is then rendered from the viewpoint of the appropriate avatar to yield the required views of the virtual meeting. We show why such views of a shared space do not convey gaze information realistically when using traditional computer displays. We describe a new approach that uses a different arrangement of the avatars for each participant in order to preserve the semantic significance of gaze. We present a design process for arranging these avatars. Finally, we demonstrate the effectiveness of the new interface with experimental results.
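To make the core idea concrete, here is a minimal sketch of per-viewer, semantically consistent gaze routing: each viewer has their own avatar arrangement, and a sender's avatar is oriented toward wherever the gaze target appears on *that* viewer's display (or toward the camera when the viewer is the target). The function and data structures below are illustrative assumptions, not the paper's actual design process or implementation.

```python
# Hypothetical sketch of semantically consistent gaze routing.
# All names (ScreenPos, gaze_direction_for_viewer, layout) are assumptions
# for illustration; the paper's design process is not reproduced here.

from dataclasses import dataclass


@dataclass
class ScreenPos:
    x: float  # horizontal position of an avatar on a viewer's display
    y: float  # vertical position


def gaze_direction_for_viewer(sender: str, target: str, viewer: str,
                              layout: dict[str, dict[str, ScreenPos]]) -> str:
    """Decide how `sender`'s avatar should appear to gaze on `viewer`'s display.

    `layout[viewer]` maps each remote participant to the position of their
    avatar on that viewer's screen. Each viewer may have a different
    arrangement; gaze is routed by its semantic target rather than by a
    single shared geometry, so the *meaning* of a glance is preserved.
    """
    if target == viewer:
        # The sender is looking at this viewer: render mutual gaze
        # (avatar looks straight out of the screen).
        return "look-at-camera"
    # Otherwise, orient the sender's avatar toward wherever the target's
    # avatar sits on *this* viewer's display.
    sender_pos = layout[viewer][sender]
    target_pos = layout[viewer][target]
    return "look-left" if target_pos.x < sender_pos.x else "look-right"


# Example: viewer "D" sees remote participants A, B, C arranged left to right.
layout = {
    "D": {"A": ScreenPos(0.2, 0.5), "B": ScreenPos(0.5, 0.5), "C": ScreenPos(0.8, 0.5)},
}
print(gaze_direction_for_viewer("A", "C", "D", layout))  # -> "look-right"
print(gaze_direction_for_viewer("B", "D", "D", layout))  # -> "look-at-camera"
```

The point of the sketch is only that gaze is resolved per viewer against that viewer's own avatar arrangement, which is what distinguishes this approach from rendering a single shared virtual meeting room for everyone.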