A spatially distributed model for foreground segmentation

  • Authors: Patrick Dickinson, Andrew Hunter, Kofi Appiah

  • Affiliation: Center for Visual Surveillance and Machine Perception, University of Lincoln, Lincoln, UK (all authors)

  • Venue: Image and Vision Computing
  • Year: 2009

Abstract

Foreground segmentation is a fundamental first processing stage for vision systems that monitor real-world activity. In this paper, we consider the problem of achieving robust segmentation in scenes where the appearance of the background varies unpredictably over time. Variations may be caused by processes such as moving water, or foliage moved by wind, and typically degrade the performance of standard per-pixel background models. Our proposed approach addresses this problem by modeling homogeneous regions of scene pixels as an adaptive mixture of Gaussians in color and space. Model components are used to represent both the scene background and moving foreground objects. Newly observed pixel values are probabilistically classified, such that the spatial variance of the model components supports correct classification even when the background appearance is significantly distorted. We evaluate our method on several challenging video sequences, and compare our results with both per-pixel and Markov Random Field-based models. Our results show the effectiveness of our approach in reducing incorrect classifications.
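The abstract's core idea — classifying each pixel by its likelihood under mixture components defined jointly over position and color, so that a large spatial variance lets a background component absorb displaced background pixels (e.g. wind-blown foliage) — can be illustrated with a minimal sketch. This is not the authors' implementation: the component structure, weights, and the diagonal-covariance assumption here are illustrative choices, and the adaptive update step of the actual model is omitted.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    # Log-density of a multivariate Gaussian with diagonal covariance `var`.
    diff = x - mean
    return -0.5 * (np.sum(np.log(2 * np.pi * var)) + np.sum(diff ** 2 / var))

def classify_pixel(feature, components):
    """Assign a pixel's joint space-color feature vector (x, y, r, g, b)
    to the most likely mixture component and return that component's label.

    `components` is a list of dicts with 'weight', 'mean', 'var' (diagonal
    covariance), and 'label' ('bg' or 'fg') -- an illustrative structure,
    not the paper's data layout.
    """
    best_label, best_score = None, -np.inf
    for c in components:
        score = np.log(c["weight"]) + gaussian_logpdf(feature, c["mean"], c["var"])
        if score > best_score:
            best_score, best_label = score, c["label"]
    return best_label

# Two illustrative components: a spatially broad background region
# (e.g. waving foliage, large positional variance) and a compact
# foreground blob.
components = [
    {"weight": 0.7, "label": "bg",
     "mean": np.array([50.0, 50.0, 30.0, 120.0, 40.0]),
     "var":  np.array([400.0, 400.0, 200.0, 200.0, 200.0])},
    {"weight": 0.3, "label": "fg",
     "mean": np.array([80.0, 60.0, 200.0, 50.0, 50.0]),
     "var":  np.array([25.0, 25.0, 100.0, 100.0, 100.0])},
]

# A greenish pixel displaced from the background component's spatial mean
# is still classified as background, because the large spatial variance
# tolerates the positional distortion.
print(classify_pixel(np.array([60.0, 45.0, 35.0, 110.0, 45.0]), components))  # bg
print(classify_pixel(np.array([82.0, 61.0, 195.0, 55.0, 48.0]), components))  # fg
```

The key design point mirrored from the abstract is the joint (x, y, r, g, b) feature space: a purely per-pixel model would misclassify the displaced foliage pixel, whereas the spatial variance term lets the region-level component account for it.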