Augmenting Film and Video Footage with Sensor Data

  • Authors:
  • Norman Makoto Su; Heemin Park; Eric Bostrom; Jeff Burke; Mani B. Srivastava; Deborah Estrin

  • Venue:
  • PerCom '04: Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications
  • Year:
  • 2004

Abstract

With the advent of tiny networked devices, Mark Weiser's vision of a world embedded with invisible computers is coming of age. Due to their small size and relative ease of deployment, sensor networks have been utilized by zoologists, seismologists and military personnel. In this paper, we investigate the novel application of sensor networks to the film industry. In particular, we are interested in augmenting film and video footage with sensor data. Unobtrusive sensors are deployed on a film set or in a television studio and on performers. During the filming of a scene, sensor data such as light intensity, color temperature and location are collected and synchronized with each film or video frame. Later, editors, graphics artists and programmers can view this data in synchronization with film and video playback. For example, such data can help define a new level of seamless integration between computer graphics and real-world photography. A real-time version of our system would allow sensor data to trigger camera movement and cue special effects. In this paper, we discuss the design and implementation of the first part of our embedded film set environment, the augmented recording system. Augmented recording is a foundational component for the UCLA Hypermedia Studio's research into the use of sensor networks in film and video production. In addition, we have evaluated our system in a television studio.
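
The core mechanism the abstract describes is aligning time-stamped sensor readings with individual video frames so they can be reviewed during playback. The sketch below illustrates one way such an alignment could work; it is an assumption-laden illustration, not the authors' implementation. The sample data, function names, and the ~29.97 fps NTSC frame rate are all hypothetical.

```python
# Illustrative sketch: grouping timestamped sensor samples by video frame.
# Assumes samples carry a capture time in seconds since the start of the shot
# and that the camera runs at a fixed, known frame rate.

NTSC_FPS = 30000 / 1001  # assumed ~29.97 fps NTSC studio rate

def frame_for_timestamp(t_seconds, fps=NTSC_FPS):
    """Map a sensor sample's capture time to the frame it falls within."""
    return int(t_seconds * fps)

def index_samples_by_frame(samples, fps=NTSC_FPS):
    """Group (timestamp, reading) pairs by frame number, so an editor can
    look up every reading taken during any given frame of the footage."""
    by_frame = {}
    for t, reading in samples:
        by_frame.setdefault(frame_for_timestamp(t, fps), []).append(reading)
    return by_frame

# Hypothetical light-intensity samples: (seconds since shot start, lux)
samples = [(0.010, 812), (0.035, 815), (0.051, 799), (0.070, 801)]
print(index_samples_by_frame(samples))
# {0: [812], 1: [815, 799], 2: [801]}
```

In practice a system like the one described would also need a shared clock between the sensor network and the camera (e.g., a common timecode source), since any skew between the two directly misassigns readings to neighboring frames.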