MGRF controlled stochastic deformable model

  • Authors:
  • Ayman El-Baz; Aly Farag; Georgy Gimelfarb

  • Affiliations:
  • Ayman El-Baz and Aly Farag: Computer Vision and Image Processing Laboratory, University of Louisville, Louisville, KY
  • Georgy Gimelfarb: Department of Computer Science, University of Auckland, Auckland, New Zealand

  • Venue:
  • SCIA'05: Proceedings of the 14th Scandinavian Conference on Image Analysis
  • Year:
  • 2005


Abstract

Deformable (active) contour and surface models are powerful image segmentation techniques. We introduce a novel, fast, and robust bi-directional parametric deformable model that can segment regions of intricate shape in multi-modal greyscale images. Its speed and robustness stem from using joint probabilities of the signals and region labels at individual points as the external forces guiding the model evolution. These joint probabilities are derived from a Markov–Gibbs random field (MGRF) image model that treats an image as a sample of two interrelated spatial stochastic processes. The low-level process, with conditionally independent and arbitrarily distributed signals, relates to the observed image, whereas the hidden map of regions is represented by a high-level MGRF of interdependent region labels. Marginal probability distributions of signals in each region are recovered from the mixed empirical signal distribution over the whole image: each marginal is approximated by a linear combination of Gaussians (LCG) having both positive and negative components. The LCG parameters are estimated using our previously proposed modification of the EM algorithm, and the high-level Gibbs potentials are computed analytically. Comparative experiments show that the proposed model outlines complicated boundaries of multi-modal objects considerably more accurately than other known counterparts.
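To illustrate the LCG approximation mentioned in the abstract, the sketch below evaluates a density built from positive and negative Gaussian components whose positive weights minus negative weights sum to one, so the combination remains a (signed) approximation of a probability density. The particular component values are hypothetical, chosen only for illustration; the paper's modified EM algorithm for estimating them is not reproduced here.

```python
import numpy as np

def gaussian(q, mu, sigma):
    # Standard 1-D Gaussian density with mean mu and std sigma.
    return np.exp(-0.5 * ((q - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def lcg_density(q, pos, neg):
    """Linear combination of Gaussians (LCG): the sum of positively
    weighted components minus the sum of negatively weighted ones.
    `pos` and `neg` are lists of (weight, mu, sigma) triples."""
    density = sum(w * gaussian(q, mu, s) for w, mu, s in pos)
    density -= sum(w * gaussian(q, mu, s) for w, mu, s in neg)
    return density

# Hypothetical components for a bimodal greyscale marginal;
# positive weights (0.7 + 0.5) minus negative weight (0.2) equal 1.
pos = [(0.7, 60.0, 10.0), (0.5, 180.0, 25.0)]
neg = [(0.2, 120.0, 30.0)]

grid = np.linspace(-100.0, 400.0, 2001)
p = lcg_density(grid, pos, neg)
```

The negative components let the LCG carve out probability mass between modes, which is how it can fit empirical marginals more closely than an ordinary Gaussian mixture; the weight constraint keeps the total integral at one.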