Advances in variational image segmentation using AM-FM models: regularized demodulation and probabilistic cue integration

  • Authors:
  • Georgios Evangelopoulos; Iasonas Kokkinos; Petros Maragos

  • Affiliations:
  • Computer Vision, Speech Communication and Signal Processing Group, National Technical University of Athens, Greece (all authors)

  • Venue:
  • VLSM'05: Proceedings of the Third International Conference on Variational, Geometric, and Level Set Methods in Computer Vision
  • Year:
  • 2005

Abstract

Current state-of-the-art methods in variational image segmentation using level set methods can robustly segment complex textured images in an unsupervised manner. In recent work [18,19], we explored the potential of AM-FM features for driving the unsupervised segmentation of a wide variety of textured images. Our first contribution in this work is at the feature extraction level, where we introduce a regularized approach to the demodulation of AM-FM-modelled signals. Replacing the cascade of multiband filtering and subsequent differentiation with analytically derived equivalent filtering operations increases noise robustness and alleviates discretization problems in the implementation of the demodulation algorithm. Our second contribution builds on a generative model we recently proposed [18,20], which offers a measure related to the local prominence of a specific class of features, such as edges or textures. Introducing these measures as weighting terms in the evolution equations allows different cues to be fused in a simple and efficient manner. Our systematic evaluation on the Berkeley segmentation benchmark demonstrates that this fusion method yields improved results compared to both our previous work and current state-of-the-art methods.
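To make the first contribution concrete, the following is a minimal 1-D sketch of regularized energy-separation (ESA) demodulation on a single complex Gabor channel; it illustrates the general idea, not the paper's 2-D implementation. The key step is the identity (x * g)' = x * g': every derivative needed by the Teager-Kaiser energy operator is obtained by convolving the input with an analytically differentiated kernel, rather than by discretely differentiating the filter output. The function names (gabor_kernels, gabor_esa, teager) and parameter values are hypothetical.

    import numpy as np

    def gabor_kernels(sigma, omega0, radius=None):
        # Complex Gabor kernel and its first three analytic derivatives,
        # sampled on an integer grid and normalized so that a cosine at
        # omega0 passes with roughly unit gain through the real channel.
        if radius is None:
            radius = int(np.ceil(4 * sigma))
        t = np.arange(-radius, radius + 1, dtype=float)
        env = np.exp(-t**2 / (2 * sigma**2))
        g = env * np.exp(1j * omega0 * t)
        u = -t / sigma**2 + 1j * omega0           # u = d/dt log g
        g1 = u * g                                # g'
        g2 = (u**2 - 1.0 / sigma**2) * g          # g''
        g3 = (u**3 - 3.0 * u / sigma**2) * g      # g'''
        norm = env.sum() / 2.0                    # approx. channel gain at omega0
        return [k / norm for k in (g, g1, g2, g3)]

    def teager(y, y1, y2):
        # Teager-Kaiser energy Psi[y] = (y')^2 - y * y'', with the
        # derivatives supplied by analytic filtering, not differences.
        return y1**2 - y * y2

    def gabor_esa(x, sigma, omega0, eps=1e-12):
        # Regularized 1-D ESA: amplitude and instantaneous-frequency
        # estimates from one real Gabor channel.
        kernels = gabor_kernels(sigma, omega0)
        y, y1, y2, y3 = [np.convolve(x, np.real(k), mode='same') for k in kernels]
        psi_y = np.maximum(teager(y, y1, y2), eps)    # Psi of the output
        psi_dy = np.maximum(teager(y1, y2, y3), eps)  # Psi of its derivative
        inst_freq = np.sqrt(psi_dy / psi_y)           # rad/sample
        inst_amp = psi_y / np.sqrt(psi_dy)
        return inst_amp, inst_freq

As a quick sanity check, demodulating a synthetic AM-FM tone near the channel's tuning should recover the modulating signals:

    t = np.arange(4000, dtype=float)
    am = 1.0 + 0.4 * np.cos(2 * np.pi * t / 700)           # slow amplitude modulation
    phase = 0.6 * t + 15.0 * np.sin(2 * np.pi * t / 1500)  # 0.6 rad/sample carrier + FM
    amp, freq = gabor_esa(am * np.cos(phase), sigma=8.0, omega0=0.6)

The second contribution admits an equally small sketch. Assuming per-pixel prominence measures for each cue (e.g. edge and texture posteriors under the generative model), the cue-specific force terms of the level set evolution can be gated by a convex per-pixel combination, so the locally dominant cue drives the front; fused_force and its arguments are hypothetical names:

    def fused_force(forces, prominences, eps=1e-12):
        # Convex per-pixel combination of cue-specific force fields
        # (equally shaped arrays), weighted by each cue's prominence.
        w_sum = sum(prominences) + eps
        return sum(p * f for p, f in zip(prominences, forces)) / w_sum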