Segmenting hippocampus from 7.0 Tesla MR images by combining multiple atlases and auto-context models

  • Authors:
  • Minjeong Kim;Guorong Wu;Wei Li;Li Wang;Young-Don Son;Zang-Hee Cho;Dinggang Shen

  • Affiliations:
  • Department of Radiology and BRIC, University of North Carolina at Chapel Hill;Department of Radiology and BRIC, University of North Carolina at Chapel Hill;Department of Radiology and BRIC, University of North Carolina at Chapel Hill;Department of Radiology and BRIC, University of North Carolina at Chapel Hill;Neuroscience Research Institute, Gachon University of Medicine and Science, Incheon, Korea;Neuroscience Research Institute, Gachon University of Medicine and Science, Incheon, Korea;Department of Radiology and BRIC, University of North Carolina at Chapel Hill

  • Venue:
  • MLMI'11: Proceedings of the Second International Conference on Machine Learning in Medical Imaging
  • Year:
  • 2011

Abstract

In the investigation of neurological diseases, accurate measurement of the hippocampus is important for differentiating inter-subject differences and detecting subtle longitudinal changes. Although many automatic segmentation methods have been developed, their performance is often limited by the poor image contrast of the hippocampus in MR images acquired from 1.5T or 3.0T scanners. Recently, the emergence of 7.0T scanners has shed new light on the study of the hippocampus by providing much higher contrast and resolution. However, automatic segmentation algorithms for 7.0T images still lag behind the development of high-resolution imaging techniques. In this paper, we present a learning-based algorithm for segmenting hippocampi from 7.0T images by combining a multi-atlas technique with auto-context models. Specifically, for each atlas (along with the other aligned atlases), an Auto-Context Model (ACM) is trained to iteratively construct a sequence of classifiers that integrate both image appearance and context features within a local patch. Since 7.0T images contain rich texture information, advanced texture features are also extracted and incorporated into the ACM during the training stage. With multiple atlases, multiple sequences of ACM-based classifiers are trained, one in each atlas's space. In the application stage, a new image is segmented by first applying each atlas's sequence of learned classifiers to it, and then fusing the resulting segmentations from all atlases (i.e., all classifier sequences) with a label-fusion technique. Experimental results on six 7.0T images with a voxel size of 0.35 × 0.35 × 0.35 mm³ show that our method obtains much better results than a method using only the conventional auto-context model.
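
To make the two core ideas of the abstract concrete, below is a minimal sketch (not the authors' implementation) of (1) an auto-context sequence of classifiers whose features concatenate local appearance with the context/probability map produced by the previous classifier, and (2) majority-vote fusion of per-atlas segmentations. The patch-based feature extractor, the random-forest base classifier, and majority voting are illustrative assumptions; the paper's registration step and its specific texture features are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def extract_patches(volume, radius=1):
    """Flatten a cubic neighborhood around every voxel into a feature row.

    Assumed helper: a stand-in for the appearance/context/texture features
    described in the paper.
    """
    padded = np.pad(volume, radius, mode="edge")
    feats = []
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = padded[
                    radius + dz: radius + dz + volume.shape[0],
                    radius + dy: radius + dy + volume.shape[1],
                    radius + dx: radius + dx + volume.shape[2],
                ]
                feats.append(shifted.reshape(-1))
    return np.stack(feats, axis=1)  # (n_voxels, n_features)


def train_auto_context(image, labels, n_iterations=3):
    """Train a sequence of classifiers in one atlas space (ACM idea)."""
    appearance = extract_patches(image)
    context = np.full((appearance.shape[0], 1), 0.5)  # uninformative prior
    classifiers = []
    for _ in range(n_iterations):
        # Features = appearance patch + patch of the current context map.
        features = np.hstack(
            [appearance, extract_patches(context[:, 0].reshape(image.shape))])
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(features, labels.reshape(-1))
        classifiers.append(clf)
        # The posterior map becomes the context feature for the next round.
        context = clf.predict_proba(features)[:, [1]]
    return classifiers


def apply_auto_context(classifiers, image):
    """Apply one atlas's trained classifier sequence to a new image."""
    appearance = extract_patches(image)
    context = np.full((appearance.shape[0], 1), 0.5)
    for clf in classifiers:
        features = np.hstack(
            [appearance, extract_patches(context[:, 0].reshape(image.shape))])
        context = clf.predict_proba(features)[:, [1]]
    return (context[:, 0] > 0.5).reshape(image.shape)


def fuse_labels(segmentations):
    """Majority-vote label fusion across the per-atlas segmentations."""
    votes = np.mean([s.astype(float) for s in segmentations], axis=0)
    return votes > 0.5
```

As a usage pattern, each atlas (image plus manual hippocampus mask, assumed already aligned to the test image) yields its own classifier sequence via `train_auto_context`; applying every sequence to the test image with `apply_auto_context` produces one binary segmentation per atlas, and `fuse_labels` combines them into the final result.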