Multi-resolution image data fusion using 2-D discrete wavelet transform and self-organizing neural networks

  • Authors:
  • Q. P. Zhang; M. Liang; W. C. Sun

  • Affiliations:
  • Fudan University, Shanghai, China (all authors)

  • Venue:
  • VRCAI '04: Proceedings of the 2004 ACM SIGGRAPH International Conference on Virtual Reality Continuum and its Applications in Industry
  • Year:
  • 2004

Abstract

In recent years, many solutions to multi-resolution image data fusion have been proposed; however, merely stacking image-processing algorithms cannot reproduce the human ability to fuse images. Building on a review of psychophysical and physiological research on human vision, this paper presents an effective multi-resolution image data fusion methodology, based on discrete wavelet transform theory and self-organizing neural networks, that simulates the image recognition and understanding processes of the human visual system. Through the two-dimensional wavelet transform, the original images are decomposed into different types and levels of detail. The integration rule is built with self-organizing neural networks, mimicking the automatic processing of the human brain. As an example, the model is applied to images obtained by the Cyclone Center Locating Satellite System (CCLSS). The effectiveness of the proposed model is demonstrated by comparing its results with those of several other image fusion methods.
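To illustrate the wavelet-domain fusion pipeline the abstract describes, here is a minimal numpy-only sketch: a one-level 2-D Haar DWT splits each image into an approximation subband and three detail subbands, and the subbands of two registered images are merged before inverse transformation. The paper's actual integration rule is learned by a self-organizing neural network; as a hypothetical stand-in, this sketch averages the approximations and keeps the larger-magnitude detail coefficient. All function names are illustrative, not from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT. Returns the approximation (LL) and the
    horizontal, vertical, and diagonal detail subbands (LH, HL, HH).
    Assumes even height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def fuse(img1, img2):
    """Fuse two registered same-size images in the wavelet domain.
    Approximations are averaged; for each detail subband the coefficient
    with the larger magnitude is kept. (The paper instead learns this
    integration rule with a self-organizing neural network.)"""
    c1 = haar_dwt2(img1)
    c2 = haar_dwt2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return haar_idwt2(ll, *details)
```

Because the Haar transform here is exactly invertible, fusing an image with itself returns the image unchanged, which is a useful sanity check on any such fusion rule.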