Error decreasing of background subtraction process by modeling the foreground

  • Authors:
  • Christophe Gabard; Laurent Lucat; Catherine Achard; C. Guillot; Patrick Sayd

  • Affiliations:
  • CEA LIST, Vision and Content Engineering Laboratory, Gif-sur-Yvette, France (C. Gabard, L. Lucat, C. Guillot, P. Sayd); UPMC Univ Paris 06, Institute of Intelligent Systems and Robotics, Paris Cedex, France (C. Achard)

  • Venue:
  • ACCV'10 Proceedings of the 2010 international conference on Computer vision - Volume Part I
  • Year:
  • 2010

Abstract

Background subtraction is often one of the first tasks involved in video surveillance applications. Classical methods use a statistical background model and compute a distance between each part (pixel or block) of the current frame and the model to detect moving targets. Segmentation is then obtained by thresholding this distance. This commonly used approach suffers from two main drawbacks. First, the segmentation is done blindly, without considering the foreground appearance. Secondly, the threshold value is often specified empirically, according to a visual evaluation of quality; this means both that the value is scene-dependent and that its setting cannot be automated with an objective criterion. To address these drawbacks, we introduce in this article a foreground model to improve the segmentation process. Several segmentation strategies are proposed and compared both theoretically and experimentally. Thanks to a theoretical error estimation, an optimal segmentation threshold can be deduced to control the segmentation behaviour, for example to maintain a targeted false-alarm rate. This approach improves segmentation results in video surveillance applications, particularly in difficult situations such as non-stationary backgrounds.
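The abstract contrasts classical distance thresholding against the background model with a decision that also takes the foreground appearance into account. A minimal sketch of that second idea, assuming per-pixel Gaussian background and foreground models and a likelihood-ratio test (all function and parameter names here are hypothetical illustrations, not the paper's actual method or notation):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Per-pixel Gaussian likelihood (an assumed model choice)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def segment_likelihood_ratio(frame, bg_mean, bg_std, fg_mean, fg_std, tau=1.0):
    """Label a pixel as foreground when its foreground likelihood exceeds
    its background likelihood by a factor tau.

    tau plays the role of the segmentation threshold: rather than being set
    by visual inspection, it could in principle be chosen from an error
    analysis to target a desired false-alarm rate, as the abstract suggests.
    """
    p_bg = gaussian_pdf(frame, bg_mean, bg_std)
    p_fg = gaussian_pdf(frame, fg_mean, fg_std)
    return p_fg > tau * p_bg  # boolean foreground mask

# Usage: a dark background (mean 0) and a bright target (mean 10)
mask = segment_likelihood_ratio(np.array([0.0, 10.0]), 0.0, 1.0, 10.0, 1.0)
```

With both models available, the decision becomes a comparison of two likelihoods instead of a one-sided distance test, which is what allows the error trade-off to be analysed explicitly.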