Adaptive background subtraction with multiple feedbacks for video surveillance

  • Authors:
  • Liyuan Li; Ruijiang Luo; Weimin Huang; Karianto Leman; Wei-Yun Yau

  • Affiliations:
  • Institute for Infocomm Research, Singapore (all authors)

  • Venue:
  • ISVC'05: Proceedings of the First International Conference on Advances in Visual Computing
  • Year:
  • 2005


Abstract

Background subtraction is the first step in video surveillance. Almost all existing methods update their background models with a constant learning rate, which makes them poorly adapted to complex situations such as crowded scenes or objects that remain stationary for a long time. In this paper, a novel framework is proposed that integrates both positive and negative feedback to control the learning rate. The negative feedback comes from background contextual analysis, and the positive feedback comes from foreground region analysis. Two descriptors of global contextual features are proposed, and visibility measures of background regions are derived from these contextual descriptors. Spatio-temporal features of the foreground regions are also exploited. By fusing the positive and negative feedback, a background-updating strategy suited to a specified surveillance task can be implemented. Three strategies, for short-term, selective, and long-term surveillance, have been implemented and tested, yielding improved results compared with conventional background subtraction.
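
To make the idea of feedback-controlled updating concrete, the sketch below shows a running-average background model whose per-frame learning rate is modulated by two feedback signals. This is a minimal illustration, not the paper's actual formulation: the `visibility` and `activity` parameters are hypothetical stand-ins for the negative feedback from background contextual analysis and the positive feedback from foreground region analysis, respectively.

```python
import numpy as np

def update_background(background, frame, foreground_mask, base_rate=0.01,
                      visibility=1.0, activity=0.0):
    """Running-average background update with a feedback-modulated learning rate.

    Illustrative sketch only (not the authors' exact method):
    - `visibility` stands in for the negative feedback from background
      contextual analysis (e.g., low visibility in crowded scenes slows updating).
    - `activity` stands in for the positive feedback from foreground region
      analysis (e.g., long-staying objects may be absorbed faster).
    """
    # Modulate the otherwise constant learning rate with both feedback signals.
    alpha = float(np.clip(base_rate * visibility * (1.0 + activity), 0.0, 1.0))

    # Selectively update only pixels currently classified as background.
    bg_pixels = ~foreground_mask
    updated = background.astype(np.float32).copy()
    updated[bg_pixels] = ((1.0 - alpha) * updated[bg_pixels]
                          + alpha * frame.astype(np.float32)[bg_pixels])
    return updated

def subtract_background(background, frame, threshold=30.0):
    """Label pixels whose absolute deviation from the background exceeds a threshold."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff > threshold
```

In the paper's framework, these feedback signals would be derived from the global contextual descriptors and the spatio-temporal foreground features; here they are simple scalar placeholders that only indicate where the feedback enters the update rule.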