Barrier coverage in camera sensor networks

  • Authors:
  • Yi Wang; Guohong Cao

  • Affiliations:
  • The Pennsylvania State University, University Park, PA (both authors)

  • Venue:
  • MobiHoc '11 Proceedings of the Twelfth ACM International Symposium on Mobile Ad Hoc Networking and Computing
  • Year:
  • 2011

Abstract

Barrier coverage has attracted much attention in the past few years. However, most previous work has focused on traditional scalar sensors. We propose to study barrier coverage in camera sensor networks. One fundamental difference between camera and scalar sensors is that cameras at different positions can capture quite different views of the same object. As a result, simply combining the sensing ranges of the cameras across the field does not necessarily form an effective camera barrier, since the face image (or the aspect of interest) of the object may be missed. To address this problem, we use the angle between the object's facing direction and the camera's viewing direction to measure the quality of sensing. An object is full-view covered if, no matter which direction it faces, there is always a camera covering it whose viewing direction is sufficiently close to the object's facing direction. We study the problem of constructing a camera barrier, which is essentially a connected zone across the monitored field such that every point within this zone is full-view covered. We propose a novel method to select camera sensors from an arbitrary deployment to form a camera barrier, and present redundancy reduction techniques to effectively reduce the number of cameras used. We also present techniques to deploy cameras for barrier coverage in a deterministic environment, and analyze and optimize the number of cameras required for this deployment under various parameters.
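
To make the full-view coverage condition concrete, below is a minimal Python sketch that checks whether a single point is full-view covered by a given set of directional cameras. All names and parameters here (the sensing radius r, the field-of-view angle phi, the effective angle theta, and the sampling of candidate facing directions) are illustrative assumptions, not the paper's notation or algorithm; the paper's barrier construction is geometric and does not rely on sampling.

import math

def angle_between(v1, v2):
    # Angle in radians between two 2-D vectors (clamped for numerical safety).
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def covers(camera, p, r, phi):
    # True if point p lies inside the camera's sector-shaped sensing region.
    # camera = ((x, y), orientation_vector), with field-of-view angle phi.
    (cx, cy), orient = camera
    to_p = (p[0] - cx, p[1] - cy)
    dist = math.hypot(to_p[0], to_p[1])
    if dist > r:
        return False
    if dist == 0:
        return True
    return angle_between(to_p, orient) <= phi / 2

def full_view_covered(p, cameras, r, phi, theta, samples=360):
    # Approximate the "no matter which direction it faces" quantifier by
    # sampling candidate facing directions. For each direction f there must
    # exist a camera that covers p and whose position lies within angle
    # theta of f, i.e. the camera "sees the face" of an object at p facing f.
    for k in range(samples):
        a = 2 * math.pi * k / samples
        f = (math.cos(a), math.sin(a))          # candidate facing direction
        ok = False
        for cam in cameras:
            (cx, cy), _ = cam
            if cx == p[0] and cy == p[1]:
                continue                        # skip degenerate co-located camera
            if not covers(cam, p, r, phi):
                continue
            to_cam = (cx - p[0], cy - p[1])     # from the object toward the camera
            if angle_between(f, to_cam) <= theta:
                ok = True
                break
        if not ok:
            return False
    return True

# Example: six cameras on a unit circle, all facing the origin.
cams = [((math.cos(a), math.sin(a)), (-math.cos(a), -math.sin(a)))
        for a in (i * math.pi / 3 for i in range(6))]
print(full_view_covered((0.0, 0.0), cams, r=2.0, phi=math.pi / 2, theta=math.pi / 3))  # True

Sampling facing directions is only an approximation of the universal quantifier in the definition; an exact test would instead examine the angular gaps between the cameras that cover the point and verify that no gap exceeds 2*theta.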