Many vision-related processing tasks, such as edge detection, image segmentation and stereo matching, can be performed more easily when all objects in the scene are in good focus. In practice, however, this may not always be feasible, as optical lenses, especially those with long focal lengths, have only a limited depth of field. One common approach to recovering an everywhere-in-focus image is wavelet-based image fusion. First, several source images of the same scene, each with a different focus, are taken and processed with the discrete wavelet transform (DWT). Among these wavelet decompositions, the wavelet coefficient with the largest magnitude is selected at each pixel location. Finally, the fused image is recovered by performing the inverse DWT. In this paper, we improve this fusion procedure by applying the discrete wavelet frame transform (DWFT) and support vector machines (SVMs). Unlike the DWT, the DWFT yields a translation-invariant signal representation. Using features extracted from the DWFT coefficients, an SVM is trained to select the source image that has the best focus at each pixel location, and the corresponding DWFT coefficients are then incorporated into the composite wavelet representation. Experimental results show that the proposed method outperforms the traditional approach both visually and quantitatively.
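The baseline max-magnitude selection rule described above can be sketched as follows. This is an illustrative NumPy implementation using a single-level Haar DWT on two source images, not the paper's method (which uses the translation-invariant DWFT with SVM-based selection); averaging the approximation band, as done here, is a common convention that the abstract does not specify.

```python
import numpy as np

def haar2d(x):
    """One-level orthonormal 2D Haar DWT; image dimensions must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # row pairs: lowpass
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # row pairs: highpass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2) # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2) # horizontal details
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2) # vertical details
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2) # diagonal details
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d = np.empty_like(a)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def fuse_max_magnitude(img1, img2):
    """Fuse two registered source images: at each coefficient location,
    keep the detail coefficient with the larger magnitude, then invert."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2  # assumed: average the approximation band
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

Because the Haar transform here is orthonormal, fusing an image with itself reconstructs it exactly; with two differently focused inputs, the sharper regions (which carry larger detail coefficients) dominate the composite.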