Image fusion based on median filters and SOFM neural networks: a three-step scheme
In recent years, many solutions to multi-resolution image data fusion have been proposed; however, merely stacking image-processing algorithms cannot reproduce the human ability to fuse images. Building on a review of psychophysical and physiological research on human vision, this paper presents an effective multi-resolution image data fusion methodology, based on discrete wavelet transform theory and a self-organizing neural network, that simulates the processes of image recognition and understanding carried out by the human visual system. Through the two-dimensional wavelet transform, the original images are decomposed into different types of detail at different levels. The fusion rule is then built with self-organizing neural networks, mimicking the automatic processing in the human brain. As an example, the model is applied to images obtained by the Cyclone Center Locating Satellite System (CCLSS). The effectiveness of the proposed model is demonstrated by comparing its results with those of several other image fusion methods.
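The decompose–fuse–reconstruct pipeline described above can be sketched in a few lines. This is a minimal numpy-only illustration, not the paper's method: it uses a one-level Haar wavelet and replaces the SOFM-learned fusion rule with a simple hand-coded rule (average the approximation subband, keep the detail coefficient with the larger magnitude). The function names `haar_dwt2`, `haar_idwt2`, and `fuse` are invented for this sketch.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2D Haar DWT: split into approximation (LL)
    # and three detail subbands (LH, HL, HH).
    a = (img[0::2] + img[1::2]) / 2.0   # row low-pass
    d = (img[0::2] - img[1::2]) / 2.0   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2] = a + d
    img[1::2] = a - d
    return img

def fuse(img_a, img_b):
    # Decompose both inputs, merge subband by subband, reconstruct.
    # Stand-in rule: average the LL band; for detail bands, take the
    # coefficient of larger magnitude (the paper learns this rule
    # with a self-organizing neural network instead).
    ca, cb = haar_dwt2(img_a), haar_dwt2(img_b)
    fused = [(sa + sb) / 2.0 if i == 0
             else np.where(np.abs(sa) >= np.abs(sb), sa, sb)
             for i, (sa, sb) in enumerate(zip(ca, cb))]
    return haar_idwt2(*fused)
```

For a multi-level decomposition one would recurse on the LL subband; the fusion rule is applied independently at each level, which is what lets different "types of details and levels" be merged by different criteria.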