Learning contextual variations for video segmentation

  • Authors:
  • Vincent Martin; Monique Thonnat

  • Affiliations:
  • INRIA Sophia Antipolis, PULSAR, Sophia Antipolis (both authors)

  • Venue:
  • ICVS'08: Proceedings of the 6th International Conference on Computer Vision Systems
  • Year:
  • 2008

Abstract

This paper deals with video segmentation in vision systems. We focus on the maintenance of background models in long-term videos of changing environments, which remains a real challenge in video surveillance. We propose an original weakly supervised method for learning contextual variations in videos. Our approach uses a clustering algorithm to automatically identify different contexts based on image content analysis. Then, state-of-the-art video segmentation algorithms (e.g. codebook, mixture of Gaussians (MoG)) are trained on each cluster. The goal is to achieve a dynamic selection of background models. We have evaluated our approach on a long video sequence (24 hours). The presented results show the segmentation improvement of our approach compared to the codebook and MoG baselines.
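The pipeline the abstract describes, cluster frames into contexts from global image features, then train and dynamically select one background model per context, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the 1-D k-means, the mean-intensity feature, and the toy running-average background model (a stand-in for the codebook and MoG models the paper actually uses) are all assumptions made for brevity.

```python
# Hypothetical sketch of the paper's idea: cluster frames into "contexts"
# by a global image statistic, then keep one background model per context
# and select it dynamically per frame. The running-average model below is
# an illustrative stand-in for the codebook / MoG models used in the paper.

import random

def frame_feature(frame):
    """Global feature: mean intensity (a stand-in for richer descriptors)."""
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def kmeans_1d(values, k, iters=20):
    """Minimal 1-D k-means to identify contexts (e.g. day vs. night)."""
    vs = sorted(values)
    # Quantile-spread initialization keeps the sketch deterministic.
    centers = [vs[(len(vs) * (2 * i + 1)) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda c: abs(v - centers[c]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def nearest(centers, v):
    return min(range(len(centers)), key=lambda i: abs(v - centers[i]))

class RunningAverageBG:
    """Toy per-pixel background model (stand-in for codebook / MoG)."""
    def __init__(self, alpha=0.05):
        self.alpha, self.bg = alpha, None
    def apply(self, frame):
        if self.bg is None:
            self.bg = [row[:] for row in frame]
        # Foreground mask: pixels far from the current background estimate.
        mask = [[abs(p - b) > 30 for p, b in zip(fr, br)]
                for fr, br in zip(frame, self.bg)]
        for i, row in enumerate(frame):
            for j, p in enumerate(row):
                self.bg[i][j] = (1 - self.alpha) * self.bg[i][j] + self.alpha * p
        return mask

# Simulate a long sequence: dark "night" frames and bright "day" frames.
random.seed(0)
frames = ([[[random.gauss(30, 2) for _ in range(4)] for _ in range(4)]
           for _ in range(50)] +
          [[[random.gauss(200, 2) for _ in range(4)] for _ in range(4)]
           for _ in range(50)])

# 1. Learn contexts from global features (weakly supervised stage).
centers = kmeans_1d([frame_feature(f) for f in frames], k=2)

# 2. One background model per context; select it dynamically per frame.
models = {i: RunningAverageBG() for i in range(len(centers))}
for f in frames:
    models[nearest(centers, frame_feature(f))].apply(f)

print(sorted(round(c) for c in centers))  # two well-separated contexts
```

Per-context models avoid the usual failure mode of a single background model in a 24-hour video: a model adapted to daylight misclassifies most night pixels (and vice versa), whereas context selection hands each frame to a model trained on statistically similar imagery.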