Self-actuation of camera sensors for redundant data elimination in wireless multimedia sensor networks

  • Authors:
  • Andrew Newell; Kemal Akkaya

  • Affiliations:
  • Department of Computer Science, Southern Illinois University Carbondale, Carbondale, IL (both authors)

  • Venue:
  • ICC'09: Proceedings of the 2009 IEEE International Conference on Communications
  • Year:
  • 2009


Abstract

With increasing interest in the deployment of wireless multimedia sensor networks (WMSNs), new challenges arise in the effective use of camera sensors to provide maximal event coverage with the least redundancy in the collected multimedia data. Given that processing and transmitting multimedia data are costly in terms of energy, camera sensors should be actuated only when an event is detected within their vicinity. While achieving maximum coverage with such actuation is desirable, the fields-of-view (FoVs) of multiple camera sensors may cover the same spots, so redundant multimedia data can unnecessarily be sent to the base station. In this paper, assuming camera sensors with fixed orientations, we propose a low-cost distributed actuation scheme that strives to turn on the least number of camera sensors, avoiding possible redundancy in the multimedia data while still providing the necessary event coverage. The basic idea of this distributed scheme is collaboration among the camera sensors that have heard from scalar sensors about an occurring event, in order to minimize the coverage overlaps among their FoVs. The scheme requires only 1-hop information at each camera sensor, and its messaging overhead is negligible. Through simulation, we show how the distributed scheme performs with respect to the cases when all the cameras within the vicinity or the region are actuated, and we assess its performance under various conditions.
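The idea of actuating only the cameras whose FoVs add non-redundant event coverage can be illustrated with a small sketch. The sketch below is not the paper's actual protocol: it models each fixed-orientation camera's FoV as a circular sector and uses a simple greedy heuristic over sampled event points to pick a minimal set of cameras; all names, parameters, and the sector-based FoV model are illustrative assumptions.

```python
import math

def fov_covers(cam, point):
    """Return True if `point` lies inside a camera's sector-shaped FoV.

    cam = (x, y, orientation_rad, half_angle_rad, sensing_range)
    A point is covered when it is within sensing range and within the
    angular half-width of the camera's fixed orientation.
    """
    cx, cy, theta, half_angle, rng = cam
    dx, dy = point[0] - cx, point[1] - cy
    if math.hypot(dx, dy) > rng:
        return False
    # Signed angular difference normalized to (-pi, pi].
    diff = (math.atan2(dy, dx) - theta + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

def select_cameras(cams, event_points):
    """Greedy set-cover heuristic: repeatedly actuate the camera that
    covers the most still-uncovered event points, stopping when no
    remaining camera adds coverage. Returns indices of actuated cameras.
    """
    uncovered = set(range(len(event_points)))
    active = []
    while uncovered:
        best, best_gain = None, 0
        for i, cam in enumerate(cams):
            if i in active:
                continue
            gain = sum(1 for p in uncovered if fov_covers(cam, event_points[p]))
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break  # remaining points are outside every FoV
        active.append(best)
        uncovered -= {p for p in uncovered
                      if fov_covers(cams[best], event_points[p])}
    return active
```

For example, two nearby cameras facing the same event region are fully redundant under this model, so the heuristic actuates only one of them. In the actual distributed scheme each camera would make this decision locally from 1-hop neighbor information rather than via a centralized loop.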