Multimedia content creation using societal-scale ubiquitous camera networks and human-centric wearable sensing

  • Authors:
  • Mathew Laibowitz, Nan-wei Gong, and Joseph A. Paradiso

  • Affiliations:
  • MIT Media Lab, Cambridge, MA, USA (all authors)

  • Venue:
  • Proceedings of the International Conference on Multimedia
  • Year:
  • 2010

Abstract

We present a novel approach to the creation of user-generated documentary video using a distributed network of sensor-enabled video cameras and wearable on-body sensor devices. The wearable sensors are used to identify the subjects in view of the camera system and to label the captured video with real-time, human-centric social and physical behavioral information. With these labels, massive amounts of continually recorded video can be browsed, searched, and automatically stitched into cohesive multimedia content. This system enables naturally occurring human behavior to drive and control a multimedia content creation system, producing video output that is understandable, informative, and/or enjoyable to its human audience. The collected sensor data is further utilized to enhance the created multimedia content, for example by editing and/or generating an audio score, determining the appropriate pacing of edits, and controlling the length and type of audio and video transitions directly from the content of the captured media. We present the design of the platform, the design of the multimedia content creation application, and evaluation results from several live runs of the complete system.
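The core idea of browsing and stitching video by sensor-derived labels can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Clip` structure, label names, and `stitch` function are hypothetical stand-ins for the system's labeled video segments and selection logic.

```python
# Hypothetical sketch: selecting and time-ordering sensor-labeled video clips.
# All identifiers (Clip, stitch, label names) are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Clip:
    camera_id: str
    start: float                 # seconds since session start
    end: float
    labels: set = field(default_factory=set)  # sensor-derived behavioral tags


def stitch(clips, required_labels):
    """Return clips carrying all required labels, ordered by start time."""
    selected = [c for c in clips if required_labels <= c.labels]
    return sorted(selected, key=lambda c: c.start)


clips = [
    Clip("cam2", 30.0, 42.0, {"alice", "gesturing"}),
    Clip("cam1", 10.0, 25.0, {"alice", "speaking"}),
    Clip("cam3", 50.0, 61.0, {"bob", "speaking"}),
]

# All clips featuring "alice", in chronological order: cam1, then cam2.
sequence = stitch(clips, {"alice"})
```

In the actual system, such labels come from on-body sensors (identifying who is in frame and what they are doing), and the stitching logic additionally drives pacing and transitions; the subset-match-and-sort above only illustrates the search/browse primitive that the labels enable.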