Kinected conference: augmenting video imaging with calibrated depth and audio

  • Authors:
  • Anthony DeVincenzi, Lining Yao, Hiroshi Ishii, Ramesh Raskar

  • Affiliations:
  • MIT Media Lab, Cambridge, MA, USA (all authors)

  • Venue:
  • Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW '11)
  • Year:
  • 2011


Abstract

The proliferation of broadband and high-speed Internet access has, in general, democratized the ability to engage in videoconferencing. However, current video systems do not meet their full potential, as they are restricted to a simple display of unintelligent 2D pixels. In this paper we present a system for enhancing distance-based communication by augmenting the traditional videoconferencing system with additional attributes beyond two-dimensional video. We explore how expanding a system's understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information about their material properties and location. We discuss specific scenarios that explore features such as synthetic refocusing, gesture-activated privacy, and spatiotemporal graphic augmentation.
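To make the synthetic-refocusing idea concrete, a depth map registered to the video frame lets the system keep pixels near a chosen focal plane sharp while blurring the rest. The sketch below is a minimal illustration of that principle in NumPy, not the paper's implementation; the function name, the depth tolerance, and the simple box blur are all assumptions for demonstration.

```python
import numpy as np

def synthetic_refocus(frame, depth, focus_depth, tolerance=0.3, blur_size=7):
    """Hypothetical sketch: keep pixels near focus_depth sharp, blur the rest.

    frame       -- (H, W, 3) image array
    depth       -- (H, W) depth map registered to the frame (e.g. from Kinect)
    focus_depth -- distance (same units as depth) to keep in focus
    tolerance   -- half-width of the in-focus depth band (assumed parameter)
    blur_size   -- side length of the box-blur kernel (assumed parameter)
    """
    pad = blur_size // 2
    # Box-blur the frame by averaging shifted copies of an edge-padded image.
    padded = np.pad(frame.astype(np.float64),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros(frame.shape, dtype=np.float64)
    for dy in range(blur_size):
        for dx in range(blur_size):
            blurred += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    blurred /= blur_size ** 2
    # Pixels whose depth lies within the focal band keep their original values.
    in_focus = (np.abs(depth - focus_depth) <= tolerance)[..., None]
    return np.where(in_focus, frame, blurred.astype(frame.dtype))
```

The same depth mask could drive the gesture-activated privacy scenario: instead of blending in a blurred copy, pixels outside the focal band would be blanked or heavily obscured.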