Efficient multi-modal image registration using local-frequency maps

  • Authors:
  • J. Liu; B. C. Vemuri; F. Bova

  • Affiliations:
  • Department of Computer and Information Science and Engineering, The University of Florida, Gainesville, FL (J. Liu, B. C. Vemuri); Department of Neurosurgery, The University of Florida, Gainesville, FL (F. Bova)

  • Venue:
  • Machine Vision and Applications - Special issue: IEEE WACV
  • Year:
  • 2002

Abstract

Fusion of multi-modal data involves automatically estimating the coordinate transformation required to align the multi-modal image data sets. Most existing methods in the literature are not fast enough for practical use, taking more than 30 min to 1 h to estimate non-rigid deformations. We propose a very fast algorithm based on matching local-frequency image representations, which naturally allows for processing the data at different scales or resolutions, a very desirable property from a computational-efficiency viewpoint. For the rigid motion case, the algorithm minimizes, over all rigid transformations, the expectation of the squared difference between the local-frequency representations of the source and target images. For the non-rigid case, we propose to approximate the non-rigid motion by piecewise rigid motions and use a novel and fast PDE-based morphing technique to estimate this non-rigid alignment. We present implementation results for synthesized and real (rigid) misalignments between CT and MR brain scans. In both cases, we validate our results against ground-truth registrations, which are known in the former case and, in the latter, obtained from manual registrations performed by an expert; these manual registrations are currently used in daily clinical practice. Finally, we present examples of non-rigid registration between T1-weighted and T2-weighted MR brain images, for which validation is only qualitative. In terms of the accuracy of the estimated rigid transforms, our algorithm's performance is comparable to that of mutual-information-based methods, but it is much faster computationally.
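
To make the rigid case concrete, the sketch below (not the authors' implementation) shows one way to match local-frequency maps under a rigid transform in Python. It assumes the local-frequency map is taken as the gradient magnitude of the local phase obtained by demodulating the image with a single Gabor-like carrier, and it minimizes the mean squared difference over (rotation, translation) with SciPy's Powell optimizer; the carrier wavelength, smoothing scale, and optimizer are illustrative choices, not those of the paper.

    import numpy as np
    from scipy import ndimage
    from scipy.optimize import minimize

    def local_frequency_map(img, wavelength=8.0, sigma=4.0):
        # Demodulate with a horizontal complex carrier and low-pass filter to get
        # a rough analytic (Gabor-like) response, then take the local-phase
        # gradient magnitude as the local-frequency estimate (assumed form).
        rows, cols = img.shape
        x = np.arange(cols)[None, :]
        carrier = np.exp(-1j * 2.0 * np.pi * x / wavelength)
        real = ndimage.gaussian_filter(img * carrier.real, sigma)
        imag = ndimage.gaussian_filter(img * carrier.imag, sigma)
        phase = np.unwrap(np.angle(real + 1j * imag), axis=1)
        fy, fx = np.gradient(phase)
        return np.hypot(fx, fy)

    def rigid_ssd(params, f_src, f_tgt):
        # Mean squared difference between the target map and the rigidly
        # transformed source map; params = (rotation angle, row shift, col shift).
        theta, ty, tx = params
        c, s = np.cos(theta), np.sin(theta)
        A = np.array([[c, -s], [s, c]])  # maps output coords to input coords
        center = (np.array(f_src.shape) - 1) / 2.0
        offset = center - A @ center + np.array([ty, tx])
        warped = ndimage.affine_transform(f_src, A, offset=offset, order=1)
        return np.mean((warped - f_tgt) ** 2)

    def register_rigid(src, tgt):
        # Estimate (theta, ty, tx) aligning src to tgt by matching frequency maps.
        f_src, f_tgt = local_frequency_map(src), local_frequency_map(tgt)
        res = minimize(rigid_ssd, x0=np.zeros(3),
                       args=(f_src, f_tgt), method="Powell")
        return res.x

In practice one would run such a matcher coarse-to-fine over an image pyramid, which corresponds to the multi-scale processing of the local-frequency representation that the abstract highlights as the source of the method's computational efficiency.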