This paper proposes a new method for merging two spatially registered images with diverse focus. It is based on multi-resolution wavelet decomposition, Self-Organizing Feature Map (SOFM) neural networks, and evolution strategies (ES). A normalized feature image, which represents the local clarity difference between corresponding spatial locations of the two source images, is extracted by a wavelet transform without down-sampling. The feature image is clustered by the SOFM learning algorithm, and every pixel pair in the source images is assigned to a class that indicates a different degree of clarity difference. Pixel pairs in different classes are merged with different fusion factors, and these fusion factors are determined by evolution strategies to achieve the best fusion performance. Experimental results show that the proposed method outperforms the wavelet transform (WT) method.
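The pipeline in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses an absolute Laplacian response as a stand-in for the undecimated wavelet detail coefficients, simple quantization of the normalized feature as a stand-in for SOFM clustering, and fixed per-class fusion factors where the paper tunes them with evolution strategies. The function names `clarity` and `fuse` are hypothetical.

```python
import numpy as np

def clarity(img):
    """Local clarity via absolute Laplacian response
    (a stand-in for undecimated wavelet detail coefficients)."""
    p = np.pad(img, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap)

def fuse(a, b, n_classes=3, factors=None):
    """Class-wise weighted fusion of two registered source images.

    The normalized clarity-difference feature image is quantized into
    n_classes bins (a crude stand-in for SOFM clustering), and each
    class is merged with its own fusion factor. Here the factors are
    fixed; in the paper they are optimized by evolution strategies.
    """
    ca, cb = clarity(a), clarity(b)
    diff = ca - cb
    # Normalized feature image in [0, 1]: high where a is locally sharper.
    feat = (diff - diff.min()) / (np.ptp(diff) + 1e-12)
    classes = np.minimum((feat * n_classes).astype(int), n_classes - 1)
    if factors is None:
        # Fusion factor = weight of image a; class 0 favors b, the last favors a.
        factors = np.linspace(0.0, 1.0, n_classes)
    w = np.asarray(factors)[classes]
    return w * a + (1.0 - w) * b
```

Because each output pixel is a convex combination of the two source pixels, the fused value always lies between the corresponding values of the inputs, which is a useful sanity check for this family of weighted-fusion rules.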