Learning based non-rigid multi-modal image registration using Kullback-Leibler divergence

  • Authors:
  • Christoph Guetter; Chenyang Xu; Frank Sauer; Joachim Hornegger

  • Affiliations:
  • Christoph Guetter: Imaging & Visualization Department, Siemens Corporate Research, Princeton, and Institute of Computer Science, Universität Erlangen-Nürnberg, Erlangen, Germany
  • Chenyang Xu: Imaging & Visualization Department, Siemens Corporate Research, Princeton
  • Frank Sauer: Imaging & Visualization Department, Siemens Corporate Research, Princeton
  • Joachim Hornegger: Institute of Computer Science, Universität Erlangen-Nürnberg, Erlangen, Germany

  • Venue:
  • MICCAI'05: Proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, Part II
  • Year:
  • 2005

Abstract

The need for non-rigid multi-modal registration is becoming increasingly common in many clinical applications. To date, however, existing techniques remain largely an academic research effort, with very few methods validated for clinical product use. Crum et al. [1] have suggested that the context-free nature of these methods is one of their main limitations, and that moving towards context-specific methods by incorporating prior knowledge of the underlying registration problem is necessary to achieve registration results that are accurate and robust enough for clinical applications. In this paper, we propose a novel non-rigid multi-modal registration method using a variational formulation that incorporates a learned joint intensity distribution as a prior. Registration is achieved by simultaneously minimizing the Kullback-Leibler divergence between the observed and the learned joint intensity distributions and maximizing the mutual information between the reference and alignment images. We have applied the proposed method to both synthetic and real images with encouraging results.
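
As a rough illustration of the cost being optimized, the sketch below estimates the observed joint intensity distribution of the reference and warped alignment images, measures its Kullback-Leibler divergence from a learned joint distribution, and combines that with the (negated) mutual information of the observed distribution. This is a minimal sketch of the similarity terms only, under stated assumptions, not the authors' variational implementation: the trade-off weight lam, the bin count, and all function names are illustrative, and the deformation model and regularization term are omitted.

import numpy as np

def joint_distribution(ref, warped, bins=64):
    # Observed joint intensity histogram of the reference and warped alignment
    # images, normalized to a probability distribution (epsilon avoids log(0)).
    hist, _, _ = np.histogram2d(ref.ravel(), warped.ravel(), bins=bins)
    return hist / hist.sum() + 1e-12

def kl_divergence(p_obs, p_learned):
    # KL divergence between the observed and the learned joint distributions;
    # p_learned is assumed strictly positive (e.g. a smoothed training histogram).
    return np.sum(p_obs * np.log(p_obs / p_learned))

def mutual_information(p_joint):
    # Mutual information of the two images, computed from their joint distribution.
    p_ref = p_joint.sum(axis=1, keepdims=True)
    p_wrp = p_joint.sum(axis=0, keepdims=True)
    return np.sum(p_joint * np.log(p_joint / (p_ref * p_wrp)))

def registration_energy(ref, warped, p_learned, lam=0.5):
    # Hypothetical weighted cost: a small KL divergence to the learned prior and a
    # large mutual information are both favored; lam is an illustrative weight.
    p_obs = joint_distribution(ref, warped, bins=p_learned.shape[0])
    return (1.0 - lam) * kl_divergence(p_obs, p_learned) - lam * mutual_information(p_obs)

Minimizing such an energy over a parameterized deformation, together with the smoothness regularization that a variational formulation typically adds, would drive the warped alignment image toward the intensity relationship observed in the training data.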