Interactive 3D teleconferencing with user-adaptive views

  • Authors:
  • Ruigang Yang, University of Kentucky, Lexington, KY
  • Andrew Nashel, University of North Carolina at Chapel Hill, Chapel Hill, NC
  • Herman Towles, University of North Carolina at Chapel Hill, Chapel Hill, NC

  • Venue:
  • Proceedings of the 2004 ACM SIGMM workshop on Effective telepresence
  • Year:
  • 2004

Abstract

We present a system and techniques for synthesizing views for three-dimensional video teleconferencing. Instead of performing complex 3D scene acquisition, we trade storage and hardware for computation, i.e., we use more cameras rather than reconstructing scene geometry. While it is expensive to capture a scene directly from all possible viewpoints, we observed that participants' viewpoints usually remain at a constant height (eye level) during video teleconferencing. We can therefore restrict the possible viewpoints to a virtual plane without sacrificing much realism, which significantly reduces the number of cameras required. We demonstrate a real-time system that uses a linear array of cameras to perform light-field-style rendering. The simplicity and robustness of light field rendering, combined with the naturally limited view volume of video teleconferencing, allow us to synthesize photo-realistic views per user request at interactive rates.
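
To make the view-synthesis idea concrete, below is a minimal Python sketch of the kind of light-field-style blending the abstract describes: given rectified frames from a linear camera array, it cross-fades the two cameras nearest a requested eye-level viewpoint. This is an illustrative assumption of how such blending could work, not the authors' implementation; the function name, parameters, and the simple two-camera cross-fade (in place of full per-ray light field resampling) are all hypothetical.

```python
import numpy as np

def synthesize_view(images, camera_xs, view_x):
    """Blend the two cameras nearest the requested eye-level viewpoint.

    images    : list of HxWx3 arrays, one per camera (assumed rectified)
    camera_xs : sorted 1D positions of the cameras along the linear array
    view_x    : desired viewpoint position along the same axis

    Hypothetical sketch: a real light-field renderer would resample
    individual rays against a focal plane rather than cross-fade
    whole images.
    """
    camera_xs = np.asarray(camera_xs, dtype=float)
    # Clamp the viewpoint to the span of the array.
    view_x = float(np.clip(view_x, camera_xs[0], camera_xs[-1]))

    # Index of the camera at or just right of the viewpoint.
    right = int(np.searchsorted(camera_xs, view_x))
    if right == 0:
        return images[0].copy()
    left = right - 1

    # Linear blending weight: 0 at the left camera, 1 at the right.
    span = camera_xs[right] - camera_xs[left]
    t = (view_x - camera_xs[left]) / span if span > 0 else 0.0

    # Cross-fade the two nearest views.
    a = images[left].astype(float)
    b = images[right].astype(float)
    return ((1.0 - t) * a + t * b).astype(images[left].dtype)

# Hypothetical usage: 8 cameras spaced 10 cm apart, viewpoint at 0.37 m.
# view = synthesize_view(frames, camera_xs=np.arange(8) * 0.10, view_x=0.37)
```

Restricting viewpoints to the eye-level plane is what makes a one-dimensional camera array sufficient here: the requested view is parameterized by a single coordinate along the array rather than a full 3D position.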