A framework for unsupervised segmentation of multi-modal medical images

  • Authors:
  • Ayman El-Baz; Aly Farag; Asem Ali; Georgy Gimel'farb; Manuel Casanova

  • Affiliations:
  • Computer Vision and Image Processing Laboratory, University of Louisville, Louisville, KY (El-Baz, Farag, Ali); Department of Computer Science, University of Auckland, Auckland, New Zealand (Gimel'farb); Department of Psychiatry, University of Louisville (Casanova)

  • Venue:
  • CVAMIA'06: Proceedings of the Second ECCV International Conference on Computer Vision Approaches to Medical Image Analysis
  • Year:
  • 2006


Abstract

We propose new techniques for unsupervised segmentation of multi-modal grayscale images such that each region of interest relates to a single dominant mode of the empirical marginal probability distribution of gray levels. Following most conventional approaches, the initial images and the desired region maps are described by a joint Markov–Gibbs random field (MGRF) model of independent image signals and interdependent region labels, but our focus is on more accurate model identification. To specify region borders more precisely, each empirical distribution of image signals is closely approximated by a linear combination of Gaussians (LCG) with both positive and negative components. An initial segmentation based on the LCG models is then iteratively refined using the MGRF with analytically estimated potentials. The convergence of each stage of the overall segmentation algorithm is discussed. Experiments with medical images show that the proposed segmentation is more accurate than known alternatives.
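
As a rough illustration of the density model described above, the sketch below fits a linear combination of Gaussians with unconstrained (hence possibly negative) weights to an empirical gray-level histogram by least squares, then separates the two dominant modes at their crossing point. Everything here, including the synthetic bimodal data, the component counts, the initial guesses, and the helper names (`gauss`, `lcg`), is an assumption made for illustration; the paper estimates the LCG parameters analytically and further refines the initial map with the MGRF model, which this sketch omits.

```python
# Minimal sketch (not the authors' code): least-squares fit of a linear
# combination of Gaussians (LCG) with signed weights to a gray-level
# histogram, followed by mode-based thresholding of the gray range.
import numpy as np
from scipy.optimize import least_squares

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def lcg(x, params, n_comp):
    # params: flat array [w_1, mu_1, sigma_1, ..., w_K, mu_K, sigma_K];
    # the weights w_i are unconstrained, so some may be negative.
    p = params.reshape(n_comp, 3)
    return sum(w * gauss(x, mu, abs(s)) for w, mu, s in p)

# Synthetic bimodal samples standing in for an empirical gray-level density.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(80, 12, 6000), rng.normal(170, 18, 4000)])
hist, edges = np.histogram(samples, bins=256, range=(0, 255), density=True)
x = 0.5 * (edges[:-1] + edges[1:])

# Two dominant (positive) plus two subordinate components; these counts and
# initial guesses are heuristic, whereas the paper identifies them from data.
n_comp = 4
p0 = np.array([0.6, 80, 15,  0.4, 170, 20,  0.05, 120, 30,  -0.05, 125, 40])
fit = least_squares(lambda p: lcg(x, p, n_comp) - hist, p0)

# Label each gray level by the dominant positive mode with the larger
# weighted density (the first two components, assuming the parameter
# ordering of p0 survives the fit); their crossing is the region border.
p = fit.x.reshape(n_comp, 3)
labels = np.argmax([w * gauss(x, mu, abs(s)) for w, mu, s in p[:2]], axis=0)
threshold = x[np.argmax(labels)]  # first gray level assigned to mode 2
print(f"estimated inter-mode threshold: {threshold:.1f}")
```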