Collaborative data compression using clustered source coding for wireless multimedia sensor networks

  • Authors:
  • Pu Wang, Rui Dai, Ian F. Akyildiz

  • Affiliations:
  • Broadband Wireless Networking Laboratory, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia (all authors)

  • Venue:
  • INFOCOM'10: Proceedings of the 29th IEEE Conference on Computer Communications
  • Year:
  • 2010


Abstract

Data redundancy caused by spatial correlation has motivated the application of collaborative multimedia in-network processing for data filtering and compression in wireless multimedia sensor networks (WMSNs). This paper proposes an information-theoretic data compression framework whose objective is to maximize the overall compression of the visual information gathered in a WMSN. To achieve this, an entropy-based divergence measure (EDM) scheme is proposed to predict the compression efficiency of performing joint coding on the images collected by spatially correlated cameras. The novelty of EDM lies in its independence of specific image types and coding algorithms, which provides a generic mechanism for evaluating compression in advance under different coding solutions. Utilizing the predictions from EDM, a distributed multi-cluster coding protocol (DMCP) is proposed to construct a compression-oriented coding hierarchy. The DMCP partitions the entire network into a set of coding clusters such that the global coding gain is maximized. Moreover, to enhance decoding reliability at the data sink, the DMCP guarantees that each camera sensor is covered by at least two different coding clusters. Experiments with the H.264 standard show that the proposed EDM can effectively predict the efficiency of joint coding from multiple sources. Further simulations demonstrate that the proposed compression framework reduces the total coding rate by 10%-23% compared with individual coding, i.e., each camera sensor compressing its own images independently.
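
The abstract does not give the exact EDM formulation, so the sketch below is only a rough illustration of the underlying idea, not the paper's method: it uses the Jensen-Shannon divergence between grayscale intensity histograms as a stand-in entropy-based divergence. Low divergence between two camera views suggests strong correlation and therefore a larger expected gain from coding them jointly. The function names and the histogram source model are assumptions introduced here for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution p (zero bins ignored)."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def histogram(img, bins=256):
    """Normalized intensity histogram of an 8-bit grayscale image array."""
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    return counts / counts.sum()

def edm_proxy(img_a, img_b):
    """Jensen-Shannon divergence between two images' intensity distributions,
    used here as a stand-in divergence measure (NOT the paper's EDM).
    Values near 0 suggest highly correlated sources; joint coding should pay off."""
    p, q = histogram(img_a), histogram(img_b)
    m = 0.5 * (p + q)
    return entropy(m) - 0.5 * (entropy(p) + entropy(q))

rng = np.random.default_rng(0)
view_a = rng.integers(0, 128, (64, 64))                            # darker scene
view_b = np.clip(view_a + rng.integers(0, 16, (64, 64)), 0, 255)   # overlapping view
view_c = rng.integers(128, 256, (64, 64))                          # unrelated bright scene
# (a, b) should be the better joint-coding candidates than (a, c):
print(edm_proxy(view_a, view_b) < edm_proxy(view_a, view_c))       # True
```

Similarly, the DMCP itself is a distributed protocol whose details are beyond the abstract; the hypothetical, centralized sketch below only mirrors its stated objective: group cameras into coding clusters by predicted divergence while guaranteeing that every camera is covered by at least two clusters for decoding reliability.

```python
def cluster_assign(divergence, heads, k=2):
    """Hypothetical centralized sketch of the clustering objective (not DMCP itself):
    assign every camera to its k least-divergent cluster heads, so each sensor
    is covered by at least two coding clusters, as the paper requires.
    divergence[i][j] is the pairwise EDM-style prediction between cameras i and j."""
    clusters = {h: [h] for h in heads}
    for cam in range(len(divergence)):
        if cam in heads:
            continue
        # pick the k cluster heads predicted to compress this camera's data best
        best = sorted(heads, key=lambda h: divergence[cam][h])[:k]
        for h in best:
            clusters[h].append(cam)
    return clusters
```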