FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing

  • Authors:
  • Lining Yao, Anthony DeVincenzi, Anna Pereira, Hiroshi Ishii

  • Affiliations:
  • Massachusetts Institute of Technology, Cambridge, USA (all authors)

  • Venue:
  • Proceedings of the 1st Symposium on Spatial User Interaction
  • Year:
  • 2013

Abstract

We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
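The core rendering idea described above, diminishing the background with synthetic blur based on per-pixel depth, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a grayscale image and a depth map of the same size (e.g. from a Kinect-style sensor), and the function names (`box_blur`, `focal_blur`) are hypothetical.

```python
def box_blur(img, radius=1):
    """Simple box blur over a 2D grayscale image (list of lists of floats)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            # Average over the (2*radius+1)^2 neighborhood, clipped at borders.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out


def focal_blur(img, depth, focus_max_depth):
    """Keep pixels nearer than focus_max_depth sharp; blur the rest.

    In FocalSpace terms, the 'focal space' would be chosen dynamically from
    multimodal cues (voice, gesture, proximity); here it is a fixed depth
    threshold for illustration only.
    """
    blurred = box_blur(img)
    return [
        [img[y][x] if depth[y][x] <= focus_max_depth else blurred[y][x]
         for x in range(len(img[0]))]
        for y in range(len(img))
    ]
```

A real system would run this per frame on the sensor's RGB and depth streams and use a stronger, separable blur kernel for performance; the threshold test is the essential step that separates foreground participants from the diminished background.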