Learning-based meta-algorithm for MRI brain extraction

  • Authors:
  • Feng Shi, Li Wang, John H. Gilmore, Weili Lin, Dinggang Shen

  • Affiliations:
  • IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill (Feng Shi, Li Wang, Dinggang Shen); Department of Psychiatry, University of North Carolina at Chapel Hill (John H. Gilmore); MRI Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill (Weili Lin)

  • Venue:
  • MICCAI'11: Proceedings of the 14th International Conference on Medical Image Computing and Computer-Assisted Intervention, Part III
  • Year:
  • 2011


Abstract

The multiple-segmentation-and-fusion approach has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. In practice, however, such methods are hindered by their computational complexity, which stems mainly from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library and assign the most similar exemplar to the test subject. Second, we propose a meta-algorithm that combines two existing brain extraction algorithms (BET and BSE) to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the test subject through the exemplars. We further develop a level-set-based fusion method that combines the multiple candidate extractions into a closed smooth surface, yielding the final result. Experimental results show that, with only a small portion of subjects used for training, the proposed method produces more accurate and robust brain extractions, achieving a Jaccard Index of 0.956±0.010 over a total of 340 subjects under 6-fold cross-validation, compared with BET and BSE even at their best parameter combinations.
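
The abstract outlines a two-part pipeline: pick the exemplar most similar to the test subject (which carries learned BET/BSE parameter settings), then fuse the resulting candidate extractions into a single mask. The minimal Python sketch below illustrates that flow under stated assumptions: it is not the authors' implementation, the similarity measure (normalized cross-correlation on pre-aligned volumes) is an illustrative choice, and simple majority voting stands in for the paper's level-set fusion. All function names are hypothetical.

import numpy as np

def select_exemplar(test_img, exemplars):
    """Return the index of the exemplar most similar to the test image.

    Assumes `exemplars` is a list of intensity volumes already affinely
    aligned with `test_img` (registration is omitted in this sketch).
    """
    def ncc(a, b):
        # Normalized cross-correlation between two volumes.
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())
    scores = [ncc(test_img, ex) for ex in exemplars]
    return int(np.argmax(scores))

def fuse_masks(candidate_masks):
    """Majority-vote fusion of binary brain masks (a simpler stand-in
    for the level-set fusion described in the abstract)."""
    stack = np.stack(candidate_masks).astype(np.float32)
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    test = rng.random((32, 32, 32))
    library = [rng.random((32, 32, 32)) for _ in range(5)]
    idx = select_exemplar(test, library)
    # In the paper's setting, the parameter settings learned for exemplar
    # `idx` would drive multiple BET/BSE runs on the test subject; here we
    # substitute random binary masks as the candidate extractions.
    masks = [rng.random((32, 32, 32)) > 0.5 for _ in range(4)]
    fused = fuse_masks(masks)
    print("selected exemplar:", idx, "fused brain voxels:", int(fused.sum()))

Majority voting is chosen here only because it is the simplest consensus rule; the paper's level-set fusion additionally enforces a closed, smooth brain surface, which voxel-wise voting does not guarantee.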