Multimodal registration via spatial-context mutual information

  • Authors:
  • Zhao Yi, Stefano Soatto

  • Affiliations:
  • University of California, Los Angeles

  • Venue:
  • IPMI'11: Proceedings of the 22nd International Conference on Information Processing in Medical Imaging
  • Year:
  • 2011

Abstract

We propose a method to efficiently compute mutual information between high-dimensional distributions of image patches. This is in turn used to perform accurate registration of images captured under different modalities, while exploiting local structure that is missed by the traditional definition of mutual information. We achieve this by organizing the space of image patches into orbits under the action of Euclidean transformations of the image plane, and estimating the modes of a distribution in this orbit space using affinity propagation. In this way, large collections of patches that are equivalent up to translations and rotations are mapped to the same representative, or "dictionary element". We then show analytically that computing mutual information for a joint distribution in this space reduces to computing mutual information between the (scalar) label maps and between the transformations mapping each patch to its closest dictionary element. We show that our approach improves registration performance over the state of the art in multimodal registration, using both synthetic and real images with quantitative ground truth.
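
To make the label-map half of this construction concrete, below is a minimal Python sketch, not the authors' implementation: each patch is canonicalized by its mean gradient orientation (a crude proxy for quotienting out in-plane rotations), the canonical patches are clustered into dictionary elements with scikit-learn's AffinityPropagation, and the resulting scalar label maps of the two modalities are compared with mutual_info_score. The patch size, stride, canonicalization rule, and toy images are illustrative assumptions, and the second mutual-information term between the residual transformations is omitted.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import mutual_info_score

def canonicalize(patch):
    """Rotate a patch so its mean gradient orientation is zero -- a crude
    stand-in for the Euclidean-orbit quotient. Returns the rotated patch
    and the angle (the 'transformation' part of the orbit, unused here)."""
    gy, gx = np.gradient(patch.astype(float))
    angle = np.degrees(np.arctan2(gy.mean(), gx.mean()))
    return ndimage.rotate(patch, -angle, reshape=False, mode="nearest"), angle

def patch_labels(image, size=7, stride=4):
    """Extract patches on a regular grid, canonicalize them, and cluster
    the canonical patches with affinity propagation. Returns one scalar
    label per patch: the index of its closest 'dictionary element'."""
    feats = []
    for i in range(0, image.shape[0] - size + 1, stride):
        for j in range(0, image.shape[1] - size + 1, stride):
            canon, _ = canonicalize(image[i:i + size, j:j + size])
            feats.append(canon.ravel())
    feats = np.asarray(feats)
    ap = AffinityPropagation(damping=0.9, max_iter=500, random_state=0).fit(feats)
    return ap.labels_

# Toy usage: two 'modalities' of the same scene (one is a nonlinear
# intensity remapping of the other), registered by construction.
rng = np.random.default_rng(0)
scene = ndimage.gaussian_filter(rng.standard_normal((64, 64)), sigma=3)
mod_a = scene
mod_b = np.tanh(3 * scene)          # different appearance, same geometry

labels_a = patch_labels(mod_a)      # label map of modality A
labels_b = patch_labels(mod_b)      # label map of modality B (same grid)
print("MI between label maps:", mutual_info_score(labels_a, labels_b))
```

On registered inputs the label-map mutual information is high, while misaligning one modality before extracting patches lowers it; under the approach described in the abstract, a registration optimizer would maximize this quantity (together with the transformation term omitted above).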