Image fusion based on a new contourlet packet

  • Authors:
  • Shuyuan Yang;Min Wang;Licheng Jiao;Ruixia Wu;Zhaoxia Wang

  • Affiliations:
  • Department of Electrical Engineering, Institute of Intelligent Information Processing, Xidian University, Xi'an 710071, China; Department of Electrical Engineering, National Key Lab. of Radar Signal Processing, Xidian University, Xi'an 710071, China

  • Venue:
  • Information Fusion
  • Year:
  • 2010

Abstract

The contourlet is a 'true' two-dimensional transform that can capture the intrinsic geometrical structure of images and has been applied to many image-processing tasks. In this paper, a new contourlet packet (CP) is constructed from a complete wavelet quadtree followed by a nonsubsampled directional filter bank (NSDFB). By combining the finer approximation of the wavelet packet (WP) with the invertibility of the NSDFB, the proposed CP reconstructs images more accurately than the WP. Moreover, because the wavelet quadtree decomposition is implemented with the stationary wavelet transform (SWT), the CP is shift-invariant and, with appropriate filters, has linear phase. After the source images are transformed by the proposed CP, a pulse-coupled neural network (PCNN) makes the fusion decision; because the output pulses of the PCNN neurons extract global features of the original images, this yields better visual results. We compare the performance of the proposed method in image fusion with that of wavelet-, contourlet-, wavelet-packet- and other contourlet-packet-based approaches. The experimental results show the superiority of the method over its counterparts in both image clarity and several numerical metrics.
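To illustrate the fusion step described in the abstract, the sketch below implements a simplified PCNN whose accumulated firing counts drive a per-pixel "choose-max" decision between two subband coefficient maps. This is a minimal illustration, not the authors' exact model: the PCNN equations here are a common simplified variant, the parameter values (`beta`, decay constants, thresholds) are illustrative, and the 3x3 linking field uses wrap-around borders for brevity.

```python
import numpy as np

def _neighborhood_sum(Y):
    # Sum of the 8 neighbours of each pixel (3x3 linking field, centre
    # excluded), computed with circular shifts; borders wrap around.
    s = np.zeros_like(Y)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            s += np.roll(np.roll(Y, dx, axis=0), dy, axis=1)
    return s

def pcnn_firing_counts(S, iters=30, beta=0.2, aL=1.0, aT=0.5, VL=1.0, VT=20.0):
    """Run a simplified PCNN on stimulus S; return per-pixel firing counts.

    All parameters are illustrative defaults, not values from the paper.
    """
    rng = S.max() - S.min()
    S = (S - S.min()) / (rng + 1e-12)        # normalise stimulus to [0, 1]
    L = np.zeros_like(S)                     # linking input
    Y = np.zeros_like(S)                     # binary pulse output
    T = np.zeros_like(S)                     # accumulated firing counts
    theta = np.ones_like(S)                  # dynamic threshold
    for _ in range(iters):
        L = np.exp(-aL) * L + VL * _neighborhood_sum(Y)
        U = S * (1.0 + beta * L)             # internal activity
        Y = (U > theta).astype(float)        # fire where activity beats threshold
        theta = np.exp(-aT) * theta + VT * Y # raise threshold where fired
        T += Y
    return T

def fuse_coefficients(cA, cB, **kw):
    """Per pixel, keep the coefficient whose PCNN fires more often."""
    tA = pcnn_firing_counts(np.abs(cA), **kw)
    tB = pcnn_firing_counts(np.abs(cB), **kw)
    return np.where(tA >= tB, cA, cB)
```

In the full pipeline this decision rule would be applied to each CP subband of the two source images, followed by the inverse CP transform; the shift-invariance of the SWT-based quadtree is what keeps this per-pixel selection free of the ringing artifacts a decimated transform would introduce.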