Data-Centric Visual Sensor Networks for 3D Sensing

  • Authors:
  • Mert Akdere; Uğur Çetintemel; Daniel Crispell; John Jannotti; Jie Mao; Gabriel Taubin

  • Affiliations:
  • Brown University, Providence, RI 02912, USA (all authors)

  • Venue:
  • GeoSensor Networks
  • Year:
  • 2008


Abstract

Visual Sensor Networks (VSNs) represent a qualitative leap in functionality over existing sensornets. With high data rates and precise calibration requirements, VSNs present challenges not faced by today's sensornets. The power and bandwidth required to transmit video data from hundreds or thousands of cameras to a central location for processing would be enormous. A network of smart cameras should instead process video data in real time, extracting features and three-dimensional geometry from the raw images of cooperating cameras. These results should be stored and processed in the network, near their origin. New content-routing techniques can allow cameras to find common features--critical for calibration, search, and tracking. We describe a novel query mechanism to mediate access to this distributed datastore, allowing high-level features to be described as compositions in space-time of simpler features.
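The idea of describing high-level features as space-time compositions of simpler ones might be sketched as follows. This is an illustrative outline only, not the paper's actual API: the `Detection` record, the feature names, and the distance/time thresholds are all hypothetical, and matching runs locally here rather than in-network.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    """A simple feature extracted in-network by one camera (hypothetical record)."""
    feature: str   # feature label, e.g. "headlight" (illustrative name)
    x: float       # world-frame position (meters)
    y: float
    z: float
    t: float       # timestamp (seconds)

def simple(name: str) -> Callable[[List[Detection]], List[Detection]]:
    """Query for a single named simple feature."""
    return lambda ds: [d for d in ds if d.feature == name]

def near(qa, qb, max_dist: float, max_dt: float):
    """Compose two queries: match pairs of results close in both space and time."""
    def q(ds: List[Detection]) -> List[Tuple[Detection, Detection]]:
        out = []
        for a in qa(ds):
            for b in qb(ds):
                dist = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2
                        + (a.z - b.z) ** 2) ** 0.5
                if dist <= max_dist and abs(a.t - b.t) <= max_dt:
                    out.append((a, b))
        return out
    return q

# A high-level feature as a space-time composition of two simpler ones
# (feature names and thresholds are invented for illustration).
vehicle = near(simple("headlight"), simple("wheel"), max_dist=2.0, max_dt=0.5)
```

In the setting the abstract describes, such composite queries would be evaluated against detections routed by content among cooperating cameras; the local evaluation above only illustrates the compositional structure.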