Multi-focus image fusion based on SOFM neural networks and evolution strategies

  • Authors:
  • Yan Wu;Chongyang Liu;Guisheng Liao

  • Affiliations:
  • National Key Laboratory of Radar Signal Processing, Xidian University, Xi'an, Shaanxi, P.R. China;School of Electronics Engineering, Xidian University, Xi'an, Shaanxi, P.R. China;National Key Laboratory of Radar Signal Processing, Xidian University, Xi'an, Shaanxi, P.R. China

  • Venue:
  • ICNC'05 Proceedings of the First international conference on Advances in Natural Computation - Volume Part III
  • Year:
  • 2005


Abstract

This paper proposes a new method for merging two spatially registered images with diverse focus. It is based on multi-resolution wavelet decomposition, Self-Organizing Feature Map (SOFM) neural networks, and evolution strategies (ES). A normalized feature image, which represents the local-region clarity difference between corresponding spatial locations of the two source images, is extracted by a wavelet transform without down-sampling. The feature image is clustered by the SOFM learning algorithm, so that every pixel pair in the source images is assigned to a class indicating a particular clarity difference. Pixel pairs in different classes are merged with different fusion factors, and these fusion factors are determined by evolution strategies to achieve the best fusion performance. Experimental results show that the proposed method outperforms the wavelet transform (WT) method.
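The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the clarity feature is approximated with a smoothed squared-Laplacian response rather than true undecimated wavelet detail coefficients, the SOFM clustering is replaced with simple quantization of the clarity-difference feature, and the per-class fusion factors are fixed assumptions instead of being evolved by ES.

```python
import numpy as np

def clarity(img):
    # High-frequency energy as a stand-in for undecimated wavelet detail
    # coefficients: squared Laplacian response, box-smoothed over 3x3 to
    # give a local-region clarity measure.
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (4 * img[1:-1, 1:-1]
                       - img[:-2, 1:-1] - img[2:, 1:-1]
                       - img[1:-1, :-2] - img[1:-1, 2:])
    e = lap ** 2
    s = np.zeros_like(e)
    s[1:-1, 1:-1] = sum(e[1 + di:e.shape[0] - 1 + di,
                          1 + dj:e.shape[1] - 1 + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return s

def fuse(a, b, n_classes=3, weights=None):
    # Normalized clarity-difference feature in [0, 1]; 0.5 means the two
    # source images are equally sharp at that location.
    ca, cb = clarity(a), clarity(b)
    feat = ca / (ca + cb + 1e-12)
    # Quantize the feature into classes (a crude stand-in for SOFM
    # clustering of the feature image).
    cls = np.minimum((feat * n_classes).astype(int), n_classes - 1)
    # Per-class fusion factors w: fused = w*a + (1-w)*b. In the paper these
    # are tuned by evolution strategies; here they are fixed assumptions.
    if weights is None:
        weights = np.linspace(0.0, 1.0, n_classes)
    w = np.asarray(weights)[cls]
    return w * a + (1 - w) * b
```

Because each fusion factor lies in [0, 1], every fused pixel is a convex combination of the two source pixels; an ES would search over the `weights` vector to maximize a fusion-quality criterion.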