Multimodal Image Registration by Information Fusion at Feature Level

  • Authors:
  • Yang Li; Ragini Verma

  • Affiliations:
  • Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA

  • Venue:
  • MICCAI '09 Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention: Part I
  • Year:
  • 2009

Abstract

This paper proposes a novel multimodal image registration method that fully utilizes the multimodal information and yields a more accurate unified deformation field. Unlike existing methods, which fuse information at the image/intensity level, the proposed method fuses multimodal information at the feature level through a Gabor wavelet transformation. At this level, complementary and redundant information can be distinguished reliably and efficiently: complementary information is combined, while redundant information is removed. Experiments on both simulated and real T1+DTI image sets demonstrate that the proposed method effectively incorporates the better characterization of white matter (WM) from DTI and of gray matter (GM) from the T1 image, leading to more accurate and efficient multimodal image registration, which paves the way for subsequent multimodal population-based studies.
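
To make the feature-level fusion idea concrete, the sketch below is a simplified illustration, not the paper's implementation: it computes a Gabor filter-bank feature vector at every voxel of a T1 slice and of a DTI-derived FA slice and concatenates the two, assuming NumPy, SciPy, and scikit-image are available. Function and parameter names (gabor_feature_map, fuse_features, the chosen frequencies and orientation count) are illustrative assumptions, and the paper's step that separates complementary from redundant attributes before fusion is omitted.

    # Minimal sketch: Gabor filter-bank features per voxel, fused across modalities.
    # Names and parameters are illustrative, not taken from the paper.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import gabor_kernel

    def gabor_feature_map(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
        """Stack Gabor magnitude responses into a per-pixel feature vector."""
        feats = []
        for f in frequencies:
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                kernel = gabor_kernel(f, theta=theta)
                real = ndi.convolve(image, np.real(kernel), mode="reflect")
                imag = ndi.convolve(image, np.imag(kernel), mode="reflect")
                feats.append(np.hypot(real, imag))   # magnitude response
        return np.stack(feats, axis=-1)              # H x W x (n_freq * n_orient)

    def fuse_features(t1_slice, fa_slice):
        """Concatenate Gabor features of the T1 slice and the FA slice at each voxel.
        The paper additionally distinguishes complementary from redundant attributes
        before fusing; that selection step is not shown here."""
        f_t1 = gabor_feature_map(t1_slice)
        f_fa = gabor_feature_map(fa_slice)
        return np.concatenate([f_t1, f_fa], axis=-1)

    if __name__ == "__main__":
        t1 = np.random.rand(64, 64)   # stand-ins for spatially aligned T1 / FA slices
        fa = np.random.rand(64, 64)
        fused = fuse_features(t1, fa)
        print(fused.shape)            # (64, 64, 24): 12 Gabor features per modality

The fused per-voxel feature vectors could then drive a feature-based deformable registration, with the complementary/redundant analysis deciding which attributes from each modality are kept.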