Image fusion, the combination of multiple image signals into a single fused image, has in recent years been extensively researched for a variety of multisensor applications. Choosing an optimal fusion approach for each application from the plethora of available algorithms, however, remains a largely open issue. The small number of metrics proposed so far provide only a rough numerical estimate of fusion performance, with limited understanding of the relative merits of different fusion schemes. This paper proposes a method for comprehensive, objective image fusion performance characterisation using a fusion evaluation framework based on gradient information representation. The method provides an in-depth analysis of fusion performance by quantifying the information contribution of each sensor, fusion gain, fusion information loss and fusion artifacts (artificial information created). It is demonstrated on an extensive dataset of multisensor images fused with a wide range of established image fusion algorithms. The results demonstrate and quantify a number of well-known issues concerning the performance of these schemes, and provide useful insight into several more subtle yet important fusion performance effects not immediately accessible to an observer.
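To make the three performance categories concrete, the following is a minimal NumPy sketch of a gradient-based breakdown in the spirit described above. It is not the paper's exact formulation: the function name, the use of a simple central-difference gradient, and the per-pixel magnitude comparison against the strongest input gradient are all simplifying assumptions introduced here for illustration.

```python
import numpy as np

def gradient_magnitude(img):
    # Central-difference gradient magnitude; one of many possible
    # gradient estimators (the paper's representation may differ).
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fusion_information_breakdown(a, b, fused, tol=1e-6):
    """Classify each fused-image pixel relative to the two inputs.

    Returns the fractions of pixels where gradient information is
    (i) transferred from the inputs into the fused image,
    (ii) lost (weaker in the fused image than in the best input), and
    (iii) artificial (stronger than in either input, i.e. created
    by the fusion process). This simple magnitude thresholding is an
    assumed stand-in for the paper's gradient-preservation measures.
    """
    ga, gb, gf = map(gradient_magnitude, (a, b, fused))
    ref = np.maximum(ga, gb)            # strongest input gradient per pixel
    lost = gf < ref - tol               # input information attenuated
    artificial = gf > ref + tol         # information with no input source
    transferred = ~lost & ~artificial   # input information preserved
    return transferred.mean(), lost.mean(), artificial.mean()
```

A fused image identical to the more informative input would score as pure transfer (fractions near 1, 0, 0), while blur introduced by fusion raises the loss fraction and spurious edges raise the artifact fraction.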