HeadSPIN: a one-to-many 3D video teleconferencing system

  • Authors:
  • Andrew Jones;Magnus Lang;Graham Fyffe;Xueming Yu;Jay Busch;Ian McDowall;Mark Bolas;Paul Debevec

  • Affiliations:
  • University of Southern California (Andrew Jones, Magnus Lang, Graham Fyffe, Xueming Yu, Jay Busch, Mark Bolas, Paul Debevec); Fakespace Labs (Ian McDowall)

  • Venue:
  • ACM SIGGRAPH 2009 Emerging Technologies
  • Year:
  • 2009

Abstract

When people communicate in person, numerous cues of attention, eye contact, and gaze direction provide important additional channels of information, making in-person meetings more efficient and effective than telephone conversations and 2D teleconferences. Two-dimensional video teleconferencing cannot convey accurate eye contact: when a participant looks into the camera, everyone watching the video stream sees the participant looking toward them; when the participant looks away from the camera (for example, toward other participants in the meeting), no one sees the participant looking at them. In this work, we develop a one-to-many teleconferencing system that uses 3D acquisition, transmission, and display technologies to reproduce gaze and eye contact accurately. In this system, the face of a single remote participant is scanned at interactive rates using structured light while the participant watches a large 2D screen showing an angularly correct view of the audience. The scanned participant's geometry is then shown on the 3D display to the audience.
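
The core of the one-to-many idea is that each audience member, sitting at a different horizontal angle around the 3D display, should receive a view of the scanned head rendered from that angle, so gaze directed at one viewer is not mistakenly seen by the others. The sketch below is not the authors' code; it is a minimal Python illustration of that angular view assignment, and the `Viewer` type, coordinate convention, and `view_spacing_deg` value are illustrative assumptions rather than properties of the actual hardware.

```python
# Minimal sketch (not the HeadSPIN implementation) of assigning each audience
# member the display view that matches their angle around the 3D display.
import math
from dataclasses import dataclass


@dataclass
class Viewer:
    name: str
    x: float  # metres, display-centred coordinates (x = sideways)
    z: float  # metres, distance in front of the display


def viewing_angle(viewer: Viewer) -> float:
    """Horizontal angle of the viewer about the display centre, in degrees."""
    return math.degrees(math.atan2(viewer.x, viewer.z))


def assign_views(viewers, view_spacing_deg=1.25):
    """Map each viewer to the nearest discrete view the display can emit.

    view_spacing_deg is a hypothetical angular resolution; the real display
    hardware determines how finely views are spaced.
    """
    assignments = {}
    for v in viewers:
        angle = viewing_angle(v)
        assignments[v.name] = (angle, round(angle / view_spacing_deg))
    return assignments


if __name__ == "__main__":
    audience = [
        Viewer("left seat", x=-1.0, z=2.0),
        Viewer("centre seat", x=0.0, z=2.5),
        Viewer("right seat", x=1.2, z=2.0),
    ]
    for name, (angle, idx) in assign_views(audience).items():
        print(f"{name}: {angle:+.1f} deg -> rendered view #{idx}")
```

A 3D display of this kind typically emits its many views simultaneously across the seating area, so the per-viewer mapping happens optically rather than in software; the sketch is only meant to show why angularly distinct views give each audience member a gaze cue consistent with where the remote participant is actually looking.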