Geometrically correct imagery for teleconferencing

  • Authors:
  • Ruigang Yang; Michael S. Brown; W. Brent Seales; Henry Fuchs

  • Affiliations:
  • Department of Computer Science, University of North Carolina at Chapel Hill (Yang, Brown, Fuchs); Department of Computer Science, University of North Carolina at Chapel Hill, and visiting Research Associate Professor from the University of Kentucky (Seales)

  • Venue:
  • MULTIMEDIA '99 Proceedings of the seventh ACM international conference on Multimedia (Part 1)
  • Year:
  • 1999


Abstract

Current camera-monitor teleconferencing applications produce unrealistic imagery and break any sense of presence for the participants. Other capture/display technologies can be used to provide more compelling teleconferencing. However, complex geometries in capture/display systems make producing geometrically correct imagery difficult. It is usually impractical to detect, model, and compensate for all effects introduced by the capture/display system. Most applications simply ignore these issues and rely on user acceptance of the camera-monitor paradigm.

This paper presents a new and simple technique for producing geometrically correct imagery for teleconferencing environments. The necessary image transformations are derived by finding a mapping between a capture and a display device for a fixed viewer location. The capture/display relationship is computed directly in device coordinates, completely avoiding the need for any intermediate, complex representations of screen geometry, capture and display distortions, and viewer location. We describe our approach and demonstrate it via several prototype implementations that operate in real time and provide a substantially more compelling sense of presence than the standard teleconferencing paradigm.
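The abstract's key idea is that once the capture-to-display relationship is known directly in device (pixel) coordinates, each frame can be corrected by a simple per-pixel lookup, with no geometric model of the screen, lenses, or viewer. The calibration step that builds that mapping is the paper's contribution and is not reproduced here; the sketch below only illustrates, with hypothetical names (`warp_frame`, `lut_x`, `lut_y`), how such a precomputed device-coordinate lookup table could be applied to each captured frame in real time.

```python
import numpy as np

def warp_frame(capture_frame, lut_x, lut_y):
    """Resample a captured frame into display coordinates.

    lut_x[r, c] and lut_y[r, c] give, for each display pixel (r, c),
    the capture-device pixel whose value should be shown there.
    The tables are assumed to come from a one-time calibration
    (hypothetical; the paper derives them per fixed viewer location).
    """
    return capture_frame[lut_y, lut_x]

# Toy example: a 4x4 "captured" image.
capture = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Identity lookup table: display pixel (r, c) shows capture pixel (r, c).
rows, cols = np.indices(capture.shape)
identity = warp_frame(capture, cols, rows)
assert np.array_equal(identity, capture)

# A non-trivial table, e.g. a horizontal flip, is just different indices;
# any per-pixel distortion correction has the same per-frame cost.
flipped = warp_frame(capture, cols[:, ::-1], rows)
assert np.array_equal(flipped, capture[:, ::-1])
```

Because the warp is a single gather per display pixel, the per-frame cost is constant regardless of how complicated the underlying capture/display geometry is, which is what makes the real-time prototypes described in the paper plausible.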