Multimodality image registration by maximization of quantitative-qualitative measure of mutual information

  • Authors:
  • Hongxia Luan; Feihu Qi; Zhong Xue; Liya Chen; Dinggang Shen

  • Affiliations:
  • Hongxia Luan, Feihu Qi: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
  • Zhong Xue, Dinggang Shen: Section of Biomedical Image Analysis, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
  • Liya Chen: Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200030, China

  • Venue:
  • Pattern Recognition
  • Year:
  • 2008

Abstract

This paper presents a novel image similarity measure, referred to as the quantitative-qualitative measure of mutual information (Q-MI), for multimodality image registration. Conventional information measures, e.g., Shannon's entropy and mutual information (MI), reflect only the quantitative aspect of information because they consider only the probabilities of events. In fact, each event also has its own utility toward the underlying goal, which can be independent of its probability of occurrence. Thus, both the quantitative (i.e., probability) and qualitative (i.e., utility) aspects of information should be considered in order to fully capture the characteristics of events. Accordingly, in multimodality image registration, Q-MI integrates the information obtained from the image intensity distributions with the utilities of the voxels in the images. Different voxels can have different utilities. For example, in brain images, two voxels with the same intensity value can still have different utilities: a white matter (WM) voxel near the cortex can have higher utility than a WM voxel inside a large, uniform WM region. In Q-MI, the utility of each voxel is determined by the regional saliency value calculated from the scale-space map of the image. Since voxels with higher utility (saliency) values contribute more to the Q-MI of the two images, the Q-MI-based registration method is much more robust than conventional MI-based registration methods. The Q-MI-based method also yields a smoother registration function with a relatively larger capture range. In this paper, the proposed Q-MI is validated and applied to the rigid registration of clinical brain images, including MR, CT, and PET images.
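The abstract does not give the formula, but weighted ("quantitative-qualitative") mutual information measures of this kind are commonly written as Q-MI(A, B) = Σ_{a,b} w(a,b) p(a,b) log[ p(a,b) / (p(a) p(b)) ], where w(a,b) is a utility weight, here derived from voxel saliency. The sketch below is a minimal illustration of how such a saliency-weighted MI could be computed from two intensity images and their saliency maps; the function name, the per-voxel weighting of joint-histogram counts, and the averaging of the two saliency values are illustrative assumptions, not the paper's exact definition.

```python
# Minimal sketch (not the authors' implementation) of a saliency-weighted
# mutual information. Assumption: each voxel contributes its mean saliency
# (from the two images) to the joint histogram instead of a unit count.
import numpy as np

def weighted_mutual_information(img_a, img_b, saliency_a, saliency_b, bins=32):
    """Mutual information of img_a and img_b, with each voxel weighted by
    the average of its saliency values in the two images (assumption)."""
    # Bin intensities into a fixed number of histogram bins.
    a = np.digitize(img_a.ravel(), np.histogram_bin_edges(img_a, bins)) - 1
    b = np.digitize(img_b.ravel(), np.histogram_bin_edges(img_b, bins)) - 1
    a = np.clip(a, 0, bins - 1)
    b = np.clip(b, 0, bins - 1)

    # Per-voxel utility: average saliency of the corresponding voxels.
    w = 0.5 * (saliency_a.ravel() + saliency_b.ravel())

    # Utility-weighted joint histogram: salient voxels contribute more.
    joint = np.zeros((bins, bins))
    np.add.at(joint, (a, b), w)

    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)

    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```

In a registration loop, one image would be resampled under the current rigid transform before each evaluation of this measure, while the saliency maps referred to in the abstract would be computed once per image from its scale-space representation.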