Energy-efficient cooperative image processing in video sensor networks

  • Authors:
  • Dan Tao; Huadong Ma; Yonghe Liu

  • Affiliations:
  • Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, School of Computer Science & Technology, Beijing University of Posts and Telecommunications, Beijing, China (Dan Tao; Huadong Ma); Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX (Yonghe Liu)

  • Venue:
  • PCM'05 Proceedings of the 6th Pacific-Rim conference on Advances in Multimedia Information Processing - Volume Part II
  • Year:
  • 2005

Abstract

Unlike conventional sensor networks, video sensor networks are distinctly characterized by their immense information volume and directional sensing models. In this paper, we propose an innovative, systematic method for image processing in video sensor networks that reduces the workload of individual sensors. Given the severe resource constraints on individual sensor nodes, our approach is to exploit the redundancy among sensor nodes by partitioning the sensing task among highly correlated sensors. For an object of interest, each sensor only needs to capture and deliver a fraction of the scene, and these partial images can be fused at the sink to reconstruct a composite image. In particular, we detail how the sensing task can be partitioned among the sensors and propose an image fusion algorithm based on the epipolar line constraint to fuse the received partial images at the sink. Experimental results show that our approach achieves satisfactory performance, and we discuss in detail the effects of different system parameters.
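
The abstract states that partial images are fused at the sink using the epipolar line constraint. As a rough, self-contained illustration of that geometric constraint only (not the authors' fusion algorithm), the Python sketch below computes the epipolar line for a point in one sensor's view and checks whether a candidate correspondence in a neighboring view satisfies x2ᵀ F x1 = 0; the fundamental matrix, point coordinates, and function names are all hypothetical.

```python
# Minimal sketch of the epipolar line constraint between two overlapping
# camera views. Names and values are illustrative assumptions, not taken
# from the paper. Requires only NumPy.
import numpy as np


def epipolar_line(F: np.ndarray, x1: np.ndarray) -> np.ndarray:
    """Return the epipolar line l2 = F @ x1 in the second view for a point
    x1 (homogeneous pixel coordinates) observed in the first view."""
    return F @ x1


def satisfies_epipolar_constraint(F: np.ndarray,
                                  x1: np.ndarray,
                                  x2: np.ndarray,
                                  tol: float = 1e-3) -> bool:
    """Check x2^T F x1 = 0 (within tolerance) for a candidate correspondence
    x1 <-> x2 between the two partial images."""
    return abs(float(x2 @ F @ x1)) < tol


if __name__ == "__main__":
    # Hypothetical fundamental matrix relating two neighboring sensor views.
    F = np.array([[0.0,   -0.001,  0.2],
                  [0.001,  0.0,   -0.3],
                  [-0.2,   0.3,    0.0]])
    x1 = np.array([120.0, 80.0, 1.0])   # pixel in the first partial image
    l2 = epipolar_line(F, x1)            # its epipolar line in the second view
    x2 = np.array([150.0, 95.0, 1.0])    # candidate corresponding pixel
    print("epipolar line:", l2)
    print("correspondence consistent:", satisfies_epipolar_constraint(F, x1, x2))
```

In a setting like the one described in the abstract, such a constraint relates corresponding pixels across the partial images captured by correlated sensors, which is what makes it usable for aligning the fragments when the composite image is reconstructed at the sink.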