Adaptive Fusion of Multimodal Surveillance Image Sequences in Visual Sensor Networks

  • Authors:
  • D. Drajic; N. Cvejic

  • Affiliations:
  • Ericsson d.o.o., Belgrade; -

  • Venue:
  • IEEE Transactions on Consumer Electronics
  • Year:
  • 2007

Abstract

In this paper we present a novel method for fusing sequences of images obtained from multimodal surveillance cameras and subject to distortions typical of visual sensor network environments. The proposed fusion method uses the structural similarity measure (SSIM) to estimate the level of noise in regions of a received image in order to optimize the selection of regions for the fused image. A region-based image fusion algorithm using the dual-tree complex wavelet transform (DT-CWT) is then used to fuse the selected regions. The performance of the proposed method was extensively tested on a number of multimodal surveillance image sequences, and the proposed method outperformed state-of-the-art algorithms, significantly increasing the quality of the fused image, both visually and in terms of the Petrovic image fusion metric.
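
The sketch below is a rough illustration of the general idea described in the abstract, not the authors' implementation. It assumes Python with NumPy, PyWavelets, and scikit-image; a plain discrete wavelet transform stands in for the paper's DT-CWT, and the per-region SSIM quality map is computed against a temporally smoothed version of each frame as a stand-in reference, since the paper's exact noise-estimation procedure is not given in the abstract. The function names, block size, and fusion rule here are hypothetical.

```python
# Minimal sketch (assumptions noted above): SSIM-guided region weighting plus
# a simple wavelet-domain fusion rule for two registered frames (e.g. visible
# and infrared). Images are assumed grayscale, float, normalized to [0, 1].
import numpy as np
import pywt
from skimage.metrics import structural_similarity


def region_quality_map(frame, smoothed, block=16):
    """Per-block SSIM between a frame and its temporally smoothed estimate;
    low scores indicate heavily distorted regions."""
    h, w = frame.shape
    quality = np.zeros((h // block, w // block))
    for i in range(quality.shape[0]):
        for j in range(quality.shape[1]):
            a = frame[i * block:(i + 1) * block, j * block:(j + 1) * block]
            b = smoothed[i * block:(i + 1) * block, j * block:(j + 1) * block]
            quality[i, j] = structural_similarity(a, b, data_range=1.0)
    return quality


def fuse_pair(visible, infrared, q_vis, q_ir, wavelet="db4", levels=3):
    """Fuse two registered frames: weight the approximation band by the
    SSIM quality maps, and pick detail coefficients by maximum magnitude
    (a common region-based fusion rule, standing in for the paper's
    DT-CWT-based scheme)."""
    cv = pywt.wavedec2(visible, wavelet, level=levels)
    ci = pywt.wavedec2(infrared, wavelet, level=levels)
    # Approximation band: weighted average driven by overall region quality.
    w_vis = q_vis.mean() / (q_vis.mean() + q_ir.mean() + 1e-9)
    fused = [w_vis * cv[0] + (1.0 - w_vis) * ci[0]]
    # Detail bands: keep the coefficient with the larger absolute value.
    for dv, di in zip(cv[1:], ci[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))
    return pywt.waverec2(fused, wavelet)
```

In use, one would compute a smoothed estimate per modality (for example a running average over recent frames), derive the quality maps with region_quality_map, and call fuse_pair per frame pair; the SSIM maps down-weight regions corrupted by channel noise so that the cleaner modality dominates the fused output.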