The increasing availability and deployment of imaging sensors operating in multiple spectral bands have led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, the cognitive aspects of multisensor image fusion have received little attention in the development of these methods. In this study we investigate how humans interpret visual and infrared images, and we compare the interpretation of these individual image modalities to that of their fused counterparts for different image fusion schemes. Our aim was to test the degree to which image fusion schemes can enhance human perception of the structural layout and composition of realistic outdoor scenes. We asked human observers to manually segment the details they perceived as most prominent in a set of corresponding visual, infrared and fused images. For each scene, the segmentations of the individual input image modalities were used to derive a joint reference ("gold standard") contour image representing the visually most salient details from both modalities in that particular scene. The resulting reference images were then used to evaluate the manual segmentations of the fused images, with a precision-recall measure as the evaluation criterion. In this sense, the best fusion method yields the largest number of correctly perceived details (originating from each of the individual modalities used as input to the fusion scheme) and the fewest false alarms (fusion artifacts or illusory details). A comparison with an objective score of subject performance indicates that the reference contour method indeed characterizes how well observers perform with the results of the fusion schemes. The results show that this evaluation method can provide valuable insight into the way fusion schemes combine perceptually important details from the individual input image modalities.
Given a reference contour image, the method can potentially be used to design image fusion schemes that are optimally tuned to human visual perception for different applications and scenarios (e.g. environmental or weather conditions).
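The evaluation pipeline described above — combining the contours perceived in the individual modalities into a joint reference image, then scoring segmentations of the fused image by precision and recall — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the pixel-wise union rule for building the reference, the binary contour masks, and the fixed pixel tolerance `tol` are all simplifications introduced here.

```python
import numpy as np

def joint_reference(contours_visual, contours_ir):
    """Combine the salient contours perceived in each input modality into a
    single 'gold standard' reference map (here simply the pixel-wise union;
    the actual derivation in the study may differ)."""
    return contours_visual | contours_ir

def _dilate(mask, tol):
    """Binary dilation with a (2*tol+1)x(2*tol+1) square element, built from
    shifted copies. Note: np.roll wraps around the image borders, which is
    acceptable for this sketch but not for production use."""
    out = mask.copy()
    for dy in range(-tol, tol + 1):
        for dx in range(-tol, tol + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def precision_recall(detected, reference, tol=1):
    """Score a contour map drawn for a fused image against the reference.

    A detected pixel counts as correct if it lies within `tol` pixels of a
    reference contour (precision: few false alarms); a reference pixel counts
    as recovered if a detection lies within `tol` pixels of it (recall: few
    missed details)."""
    ref_zone = _dilate(reference, tol)
    det_zone = _dilate(detected, tol)
    precision = (detected & ref_zone).sum() / max(detected.sum(), 1)
    recall = (reference & det_zone).sum() / max(reference.sum(), 1)
    f_score = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f_score
```

Under this score, the best fusion scheme is the one whose segmentations recover the most reference contours (high recall) while introducing the fewest illusory details or fusion artifacts (high precision).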