Tunable-Q contourlet-based multi-sensor image fusion

  • Authors:
  • Haijiang Wang, Qinke Yang, Rui Li

  • Affiliations:
  • College of Urban and Environment Sciences, Shanxi Normal University, Linfen 041004, China, and Institute of Soil and Water Conservation, Chinese Academy of Sciences and Ministry of Water Resources, ...
  • College of Urban and Environmental Sciences, Northwest University, Xi'an 710069, China
  • Institute of Soil and Water Conservation, Chinese Academy of Sciences and Ministry of Water Resources, Yangling 712100, China

  • Venue:
  • Signal Processing
  • Year:
  • 2013

Abstract

We propose a tunable-Q contourlet transform for multi-sensor texture-image fusion. The standard contourlet transform (CT) uses a multiscale pyramid to decompose an image into frequency channels of equal bandwidth on a logarithmic scale. This low-Q decomposition is ill-suited to rich-texture images, whose numerous edges produce abundant intermediate- and high-frequency components in the frequency domain. By means of a tunable decomposition parameter, the Q-factor of our tunable-Q CT can be adjusted efficiently. At an acceptable redundancy, the tunable-Q CT is also free of aliasing, and its basis functions are sharply localized in the desired region of the frequency domain. Experimental results show that fusion based on the tunable-Q CT not only preserves the spectral information of multispectral images well, but also effectively extracts texture detail from high-resolution images. The proposed method outperforms fusion based on the nonsubsampled wavelet transform and on the nonsubsampled CT in both visual quality and objective evaluation.
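To make the Q-factor argument concrete, the sketch below compares the band Q-factors (center frequency divided by bandwidth) of a standard dyadic pyramid with those of a geometric decomposition whose band-edge ratio is adjustable. This is only an illustrative model of the "tunable-Q" idea, not the authors' actual contourlet construction; the ratio parameter `r` is a hypothetical stand-in for their decomposition parameter.

```python
import numpy as np

def dyadic_bands(n_levels, fs=1.0):
    """Passbands of a standard dyadic (octave-band) pyramid:
    level k covers [fs / 2**(k+1), fs / 2**k]."""
    return [(fs / 2**(k + 1), fs / 2**k) for k in range(n_levels)]

def tunable_bands(n_levels, r, fs=1.0):
    """Illustrative tunable decomposition: geometric band edges with
    ratio r (1 < r <= 2). Smaller r gives narrower bands, hence higher Q."""
    edges = (fs / 2) * r ** -np.arange(n_levels + 1)
    return list(zip(edges[1:], edges[:-1]))  # (low, high) pairs

def q_factor(lo, hi):
    """Q = center frequency / bandwidth of a passband [lo, hi]."""
    return 0.5 * (lo + hi) / (hi - lo)

# Every octave band has the same low Q = 1.5 ...
for lo, hi in dyadic_bands(4):
    print(f"dyadic  [{lo:.4f}, {hi:.4f}]  Q = {q_factor(lo, hi):.2f}")

# ... while shrinking the edge ratio raises Q uniformly across levels,
# concentrating analysis on the mid/high frequencies where texture lives.
for lo, hi in tunable_bands(4, r=1.25):
    print(f"tunable [{lo:.4f}, {hi:.4f}]  Q = {q_factor(lo, hi):.2f}")
```

For a band [f, r·f] the Q-factor is (r + 1) / (2(r − 1)): the octave case r = 2 gives Q = 1.5, while r = 1.25 gives Q = 4.5, matching the abstract's point that a low-Q (octave) scheme cannot be sharpened without such a tunable parameter.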