On multisensor image fusion performance limits from an estimation theory perspective

  • Authors:
  • Rick S. Blum

  • Affiliations:
  • ECE Department, Lehigh University, 19 Memorial Drive West, Bethlehem, PA 18015-3084, USA

  • Venue:
  • Information Fusion
  • Year:
  • 2006


Abstract

Image fusion algorithms attempt to produce a single fused image that is more informative than any of the multiple source images used to produce it. Analytical studies of image fusion performance have been lacking. Such studies can augment existing experimental studies by addressing aspects that are difficult to examine using experimental methods. Here, an estimation theory approach is employed using a mathematical model based on the observation that each sensor can provide a different quality when viewing a given object in the scene: one sensor may be better for viewing one object, and a different sensor may be better for viewing another. The model also acknowledges that distortion and noise will enter into the sensor observations. This model allows us to employ known estimation theory techniques to find the best possible fusion performance, measured in terms of the standard estimation theory measure of performance. This performance measure has not yet received attention in the image fusion community. One notable result is that a particular weighted averaging approach is shown to yield optimum estimation performance for the model we focus on. It is also shown that it is important to employ a priori information describing which sensor is able to provide a good view of the important objects in the scene. The essential aspects of some frequently employed fusion approaches are studied, and the capabilities of these approaches are analyzed and compared to the best fusion algorithms. We hope this study will encourage further analytical studies of image fusion.
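The flavor of weighted-averaging fusion the abstract alludes to can be illustrated with a minimal sketch. Under the simplifying assumption that every sensor observes the same underlying scene corrupted by independent zero-mean noise of known variance, the inverse-variance weighted average is the minimum mean-squared-error linear estimate; the function names below are illustrative, not taken from the paper:

```python
import numpy as np

def inverse_variance_fusion(images, noise_vars):
    """Fuse noisy sensor images by inverse-variance weighted averaging.

    Assumes image_i = scene + zero-mean noise with variance noise_vars[i];
    under that model this weighting minimizes the mean-squared error.
    """
    images = np.asarray(images, dtype=float)
    weights = 1.0 / np.asarray(noise_vars, dtype=float)
    weights /= weights.sum()  # normalize so the weights sum to 1
    # Pixel-wise weighted average across the sensor axis.
    return np.tensordot(weights, images, axes=1)

# Two sensors viewing the same 2x2 scene with different noise levels:
rng = np.random.default_rng(0)
scene = np.array([[1.0, 2.0], [3.0, 4.0]])
observations = [scene + rng.normal(0.0, s, scene.shape) for s in (0.1, 0.5)]
fused = inverse_variance_fusion(observations, [0.1**2, 0.5**2])
```

Note that the weights depend only on the (assumed known) noise variances, which corresponds to the a priori sensor-quality information the abstract emphasizes.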