Robust Medical Images Segmentation Using Learned Shape and Appearance Models

  • Authors:
  • Ayman El-Baz; Georgy Gimel'Farb

  • Affiliations:
  • Bioimaging Laboratory, Bioengineering Department, University of Louisville, Louisville, USA; Department of Computer Science, University of Auckland, Auckland, New Zealand

  • Venue:
  • MICCAI '09 Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention: Part I
  • Year:
  • 2009

Abstract

We propose a novel parametric deformable model controlled by shape and visual appearance priors learned from a training subset of co-aligned medical images of goal objects. The shape prior is derived from a linear combination of vectors of distances between the training boundaries and their common centroid. The appearance prior considers gray levels within each training boundary as a sample of a Markov-Gibbs random field with pairwise interaction. Spatially homogeneous interaction geometry and Gibbs potentials are estimated analytically from the training data. To accurately separate a goal object from an arbitrary background, the empirical marginal gray-level distributions inside and outside the boundary are modeled with adaptive linear combinations of discrete Gaussians (LCDG). Owing to the analytical shape and appearance priors and a simple Expectation-Maximization procedure for estimating the object and background LCDGs, our segmentation is considerably faster than most known geometric and parametric models. Experiments with various goal images confirm the robustness, accuracy, and speed of our approach.
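
For concreteness, below is a minimal Python sketch of how a distance-based shape prior of this kind might be learned. It is not the authors' implementation: it assumes star-shaped boundaries (so the centroid distance is single-valued per angle), and all function names are hypothetical.

  import numpy as np

  def radial_distance_vector(boundary_pts, n_angles=128):
      # Resample boundary-to-centroid distances onto a common angular grid.
      centroid = boundary_pts.mean(axis=0)
      rel = boundary_pts - centroid
      angles = np.arctan2(rel[:, 1], rel[:, 0])
      dists = np.linalg.norm(rel, axis=1)
      order = np.argsort(angles)
      grid = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
      return np.interp(grid, angles[order], dists[order], period=2 * np.pi)

  def learn_shape_prior(training_boundaries, n_modes=5):
      # Stack per-image distance vectors and extract principal modes, so
      # that new shapes are linear combinations of the training vectors.
      D = np.stack([radial_distance_vector(b) for b in training_boundaries])
      mean = D.mean(axis=0)
      _, _, Vt = np.linalg.svd(D - mean, full_matrices=False)
      return mean, Vt[:n_modes]

  def synthesize_shape(mean, modes, coeffs):
      # One prior-constrained shape instance: the mean distance vector
      # plus a weighted combination of the learned deformation modes.
      return mean + coeffs @ modes

Likewise, the object and background marginals can be approximated with an EM-fitted Gaussian mixture. Note that the paper's LCDG also admits negative-weight components; the standard mixture below is only a positive-weight stand-in, not the authors' estimator.

  import numpy as np
  from sklearn.mixture import GaussianMixture

  def fit_marginal(gray_levels, n_components=4):
      # EM fit of a Gaussian mixture to an empirical gray-level sample
      # (drawn from inside or outside the current boundary).
      gm = GaussianMixture(n_components=n_components, random_state=0)
      return gm.fit(np.asarray(gray_levels, dtype=float).reshape(-1, 1))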