Objectively adaptive image fusion

  • Authors:
  • Vladimir Petrovic; Tim Cootes

  • Affiliations:
  • Imaging Science and Biomedical Engineering, University of Manchester, Oxford Road, Manchester M13 9PT, United Kingdom (both authors)

  • Venue:
  • Information Fusion
  • Year:
  • 2007

Abstract

Signal-level image fusion has attracted considerable research attention in recent years, and a plethora of algorithms has been proposed using a host of image processing and information fusion techniques. Yet the optimal information fusion strategy, or the spectral decomposition that should precede it, cannot be defined a priori for arbitrary multi-sensor data. It could in principle be learned by evaluating fusion algorithms, either subjectively or through the small number of available objective metrics, on a large set of relevant sample data. This is impractical, however, and offers no guarantee of optimal performance when realistic input conditions differ from the sample data. This paper proposes and examines the viability of a framework for objectively adaptive image fusion that explicitly optimises fusion performance over a broad range of input conditions. The idea is to employ the concepts used in objective image fusion evaluation to adapt the fusion process optimally to the input conditions. The specific focus is fusion for display, which has broad appeal in applications such as night vision, avionics and medical imaging. By integrating objective fusion metrics shown to be subjectively relevant into conventional fusion algorithms, the framework adapts fusion parameters to achieve an optimal fused display. The results show that the proposed framework achieves a considerable improvement in both the level and the robustness of fusion performance on a wide array of multi-sensor images and image sequences.
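
The core idea described in the abstract, steering fusion parameters with an objective quality metric, can be illustrated with a minimal sketch. The code below is not the authors' algorithm: it assumes a simple global weighted-average fusion rule and a hypothetical gradient-preservation score (loosely in the spirit of gradient-based objective fusion metrics), and searches over the fusion weight for the value that maximises that score.

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def edge_preservation_score(src_a, src_b, fused):
    """Hypothetical stand-in for an objective fusion metric:
    the mean fraction of the stronger input gradient that
    survives in the fused image (values in [0, 1])."""
    ga, gb, gf = map(gradient_magnitude, (src_a, src_b, fused))
    g_ref = np.maximum(ga, gb) + 1e-12          # stronger input edge at each pixel
    preserved = np.minimum(gf, g_ref) / g_ref   # fraction of that edge preserved
    return preserved.mean()

def adaptive_fusion(src_a, src_b, weights=np.linspace(0.0, 1.0, 21)):
    """Objectively adaptive fusion sketch: try candidate fusion
    weights, score each fused result with the objective metric,
    and keep the weight that maximises the score."""
    best_w, best_score, best_fused = None, -np.inf, None
    for w in weights:
        fused = w * src_a + (1.0 - w) * src_b   # simple weighted-average fusion rule
        score = edge_preservation_score(src_a, src_b, fused)
        if score > best_score:
            best_w, best_score, best_fused = w, score, fused
    return best_fused, best_w, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((64, 64))       # stand-in for e.g. an infrared input
    visible = rng.random((64, 64))  # stand-in for a visible-band input
    fused, w, q = adaptive_fusion(ir, visible)
    print(f"chosen weight={w:.2f}, metric score={q:.3f}")
```

A fuller treatment along the lines the abstract describes would adapt such parameters locally (per region or per decomposition level) and per frame for image sequences, rather than choosing a single global weight as this sketch does.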