We propose a method to efficiently compute mutual information between high-dimensional distributions of image patches. This is used in turn to accurately register images captured under different modalities, exploiting local structure that the traditional definition of mutual information misses. We achieve this by organizing the space of image patches into orbits under the action of Euclidean transformations of the image plane, and estimating the modes of a distribution in this orbit space using affinity propagation. In this way, large collections of patches that are equivalent up to translation and rotation are mapped to the same representative, or "dictionary element." We then show analytically that computing mutual information for a joint distribution in this space reduces to computing mutual information between the (scalar) label maps and between the transformations mapping each patch to its closest dictionary element. We show that our approach improves registration performance over the state of the art in multimodal registration, on both synthetic and real images with quantitative ground truth.
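The pipeline in the abstract — canonicalize each patch within its orbit, cluster the canonical patches into a dictionary with affinity propagation, then compare the scalar label maps with mutual information — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the orbit here is reduced to the four axis-aligned rotations (a coarse stand-in for the full Euclidean group), the two "modalities" are a synthetic image and a monotone remapping of it, and all function names (`canonicalize`, `patch_labels`) are hypothetical.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

def canonicalize(patch):
    # Map a patch to one representative of its orbit under the four
    # axis-aligned rotations (a toy stand-in for the Euclidean group):
    # pick the rotation with the lexicographically smallest pixels.
    rots = [np.rot90(patch, k) for k in range(4)]
    return min(rots, key=lambda p: tuple(p.ravel()))

def patch_labels(img, size=4):
    # Extract non-overlapping patches, canonicalize, and cluster with
    # affinity propagation, which selects exemplars ("dictionary
    # elements") without fixing the number of clusters in advance.
    feats = []
    for i in range(0, img.shape[0] - size + 1, size):
        for j in range(0, img.shape[1] - size + 1, size):
            feats.append(canonicalize(img[i:i + size, j:j + size]).ravel())
    ap = AffinityPropagation(random_state=0).fit(np.array(feats))
    return ap.labels_

# Two synthetic "modalities": an image and a monotone remapping of it.
img_a = rng.random((32, 32))
img_b = 1.0 - img_a

la, lb = patch_labels(img_a), patch_labels(img_b)
# MI between the scalar label maps: the cheap surrogate for
# patch-space mutual information described above.
print(mutual_info_score(la, lb))
```

In the paper this label-map term is complemented by a second mutual-information term between the transformations aligning each patch to its exemplar; the sketch above covers only the label-map part.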