Multimodal Data Fusion Based on Mutual Information

  • Authors:
  • Roger Bramon;Imma Boada;Anton Bardera;Joaquim Rodriguez;Miquel Feixas;Josep Puig;Mateu Sbert

  • Affiliations:
  • University of Girona, Girona;University of Girona, Girona;University of Girona, Girona;University of Girona, Girona;University of Girona, Girona;Josep Trueta Hospital of Girona, Girona;University of Girona, Girona

  • Venue:
  • IEEE Transactions on Visualization and Computer Graphics
  • Year:
  • 2012

Abstract

Multimodal visualization aims at fusing different data sets so that the resulting combination provides more information and understanding to the user. To achieve this aim, we propose a new information-theoretic approach that automatically selects the most informative voxels from two volume data sets. Our fusion criteria are based on the information channel created between the two input data sets, which permits us to quantify the information associated with each intensity value. This specific information is obtained from three different ways of decomposing the mutual information of the channel. In addition, an assessment criterion based on the information content of the fused data set can be used to analyze and modify the initial voxel selection by weighting the contribution of each data set to the final result. The proposed approach has been integrated into a general framework that allows the user to explore volumetric data models and interactively change some parameters of the fused data set. It has been evaluated on different medical data sets with very promising results.
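
The abstract outlines the core mechanism: an information channel (a joint intensity histogram) between the two registered volumes, a per-intensity specific information derived from decomposing the mutual information of that channel, and a per-voxel selection of the more informative modality. The sketch below is a minimal illustration of that pipeline using only one of the three decompositions mentioned (the "surprise" I(x; Y)); the bin count, function names, and the simple greater-than selection rule are illustrative assumptions, not the authors' implementation, and the assessment step that reweights each data set's contribution is omitted.

```python
import numpy as np

def joint_histogram(vol_a, vol_b, bins=64):
    """Joint intensity histogram of two registered volumes (the information channel)."""
    h, a_edges, b_edges = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    p_ab = h / h.sum()                        # joint probability p(x, y)
    return p_ab, a_edges, b_edges

def specific_information(p_xy, eps=1e-12):
    """Per-intensity 'surprise' I(x; Y) = sum_y p(y|x) log2(p(y|x) / p(y)).
    Weighting by p(x) and summing over x recovers the mutual information I(X; Y)."""
    p_x = p_xy.sum(axis=1, keepdims=True)     # marginal p(x), rows of the channel
    p_y = p_xy.sum(axis=0, keepdims=True)     # marginal p(y), columns of the channel
    p_y_given_x = np.where(p_x > eps, p_xy / np.maximum(p_x, eps), 0.0)
    ratio = np.where(p_y > eps, p_y_given_x / np.maximum(p_y, eps), 1.0)
    return np.sum(p_y_given_x * np.log2(np.maximum(ratio, eps)), axis=1)

def fuse_by_information(vol_a, vol_b, bins=64):
    """Per voxel, keep the intensity from the modality whose value is more informative
    about the other one (hypothetical selection rule, for illustration only)."""
    p_ab, a_edges, b_edges = joint_histogram(vol_a, vol_b, bins)
    info_a = specific_information(p_ab)       # informativeness of each intensity bin of A
    info_b = specific_information(p_ab.T)     # informativeness of each intensity bin of B
    bin_a = np.clip(np.digitize(vol_a, a_edges[1:-1]), 0, bins - 1)
    bin_b = np.clip(np.digitize(vol_b, b_edges[1:-1]), 0, bins - 1)
    take_a = info_a[bin_a] >= info_b[bin_b]
    return np.where(take_a, vol_a, vol_b)
```

As a usage note, the two inputs are assumed to be co-registered volumes of identical shape (e.g., a CT and an MR scan resampled to the same grid); only under that assumption does the per-voxel comparison of specific information values make sense.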