A generative model for probabilistic label fusion of multimodal data

  • Authors:
  • Juan Eugenio Iglesias; Mert Rory Sabuncu; Koen Van Leemput

  • Affiliations:
  • Martinos Center for Biomedical Imaging, MGH, Harvard Medical School; Martinos Center for Biomedical Imaging, MGH, Harvard Medical School; Martinos Center for Biomedical Imaging, MGH, Harvard Medical School, USA, Department of Informatics and Mathematical Modeling, DTU, Denmark, Departments of Information and Computer Science and of ...

  • Venue:
  • MBIA'12: Proceedings of the Second International Workshop on Multimodal Brain Image Analysis
  • Year:
  • 2012


Abstract

The maturity of registration methods, combined with the increasing processing power of computers, has made multi-atlas segmentation practical. The problem of merging the deformed label maps from the atlases is known as label fusion. Although label fusion has been well studied in intramodality scenarios, it remains relatively unexplored when the target data are multimodal or of a different modality than the atlases. In this paper, we review the literature on label fusion methods and present an extension of our previously published algorithm to the general case in which the target data are multimodal. The method is based on a generative model that exploits the consistency of voxel intensities within the target scan, given the current estimate of the segmentation. Using brain MRI scans acquired with a multiecho FLASH sequence, we compare the method with majority voting, statistical-atlas-based segmentation, the widely used FreeSurfer package, and an adaptive local multi-atlas segmentation method. The results show that our approach produces highly accurate segmentations (Dice 86.3% across 22 brain structures of interest), outperforming the competing methods.
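The majority-voting baseline and the Dice metric named in the abstract are standard enough to sketch. The following is a minimal illustration assuming NumPy arrays of integer labels; the array shapes and function names are assumptions for exposition, and this is the voting baseline plus the evaluation metric, not the authors' generative fusion model.

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse registered atlas label maps by per-voxel majority vote.

    label_maps: integer array of shape (n_atlases, X, Y, Z), each entry
    a label map already deformed to the target scan. Returns (X, Y, Z).
    """
    label_maps = np.asarray(label_maps)
    n_labels = int(label_maps.max()) + 1
    # Count, for every voxel, how many atlases vote for each label.
    votes = np.stack([(label_maps == l).sum(axis=0) for l in range(n_labels)])
    # argmax breaks ties in favor of the lower label index.
    return votes.argmax(axis=0)

def dice(seg, ref, label):
    """Dice overlap, 2|A∩B| / (|A| + |B|), for one structure label."""
    a, b = (seg == label), (ref == label)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example: three "atlases" voting on a 1x1x3 target volume.
atlases = np.array([[[[0, 1, 1]]],
                    [[[0, 1, 2]]],
                    [[[1, 1, 2]]]])
fused = majority_vote(atlases)                  # -> [[[0, 1, 2]]]
print(fused, dice(fused, np.array([[[0, 1, 2]]]), label=1))  # Dice 1.0
```

In contrast to this per-voxel vote, the generative model described in the paper additionally weighs the evidence from the target scan's own intensities, which is what allows it to handle multimodal or cross-modality target data.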