Multisensor video fusion based on spatial-temporal salience detection

  • Authors:
  • Qiang Zhang; Yueling Chen; Long Wang

  • Venue:
  • Signal Processing
  • Year:
  • 2013


Abstract

This paper proposes a novel fusion algorithm for videos with static backgrounds, based on the three-dimensional uniform discrete curvelet transform (3D-UDCT) and a spatial-temporal structure tensor. First, the 3D-UDCT decomposes the source videos into subbands at different scales and directions. Second, corresponding subbands of the source videos are merged using different fusion schemes. Finally, the fused video is obtained by the inverse 3D-UDCT. In particular, when the bandpass directional subband coefficients are merged, a spatial-temporal salience detection algorithm based on the structure tensor divides each subband into three types of regions: regions containing temporally moving targets, regions containing spatial features of the background images, and smooth regions. A different fusion rule is then designed for each type of region. Compared with several existing fusion methods, the proposed algorithm not only extracts more spatial-temporal salient features from the input videos but also achieves better spatial-temporal consistency. Moreover, with a simple modification it can be extended to fuse videos with dynamic backgrounds. Several sets of experimental results demonstrate the feasibility and validity of the proposed fusion method.
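The core idea of the salience detection step can be illustrated with a minimal sketch: build a spatial-temporal structure tensor from video gradients, smooth its entries, and classify each voxel by whether temporal or spatial gradient energy dominates. This is a simplified standalone illustration, not the paper's exact method — the paper applies this classification to 3D-UDCT subband coefficients, and the function name, `sigma`, and `rel_thr` threshold here are assumptions for the sake of the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def st_salience_labels(video, sigma=1.5, rel_thr=0.01):
    """Label each voxel of a grayscale video of shape (T, H, W):
    0 = smooth region, 1 = spatial feature, 2 = temporal (moving) target.
    sigma and rel_thr are illustrative parameters, not from the paper."""
    v = video.astype(np.float64)
    gt, gy, gx = np.gradient(v)               # gradients along t, y, x
    # Gaussian-smoothed diagonal entries of the 3x3 structure tensor
    Jtt = gaussian_filter(gt * gt, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    Jxx = gaussian_filter(gx * gx, sigma)
    spatial = Jxx + Jyy                       # spatial gradient energy
    temporal = Jtt                            # temporal gradient energy
    thr = rel_thr * max(spatial.max(), temporal.max(), 1e-12)
    labels = np.zeros(v.shape, dtype=np.uint8)
    labels[spatial > thr] = 1                 # background spatial features
    labels[temporal > thr] = 2                # motion takes priority
    return labels

# Toy video: a static vertical edge plus a small moving bright dot
T, H, W = 8, 16, 16
video = np.zeros((T, H, W))
video[:, :, 8:] = 1.0                         # static background edge
for t in range(T):
    video[t, 2, 4 + t % 4] = 5.0              # dot hopping along a row
labels = st_salience_labels(video)
```

In a full fusion pipeline, the three labels would select between fusion rules per region, e.g. preferring coefficients with stronger temporal energy inside moving-target regions and stronger spatial energy inside background-feature regions.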