Robust foreground segmentation based on two effective background models

  • Authors:
  • Xi Li; Weiming Hu; Zhongfei Zhang; Xiaoqin Zhang

  • Affiliations:
  • Chinese Academy of Sciences, Beijing, China; Chinese Academy of Sciences, Beijing, China; State University of New York, Binghamton, NY, USA; Chinese Academy of Sciences, Beijing, China

  • Venue:
  • MIR '08: Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval
  • Year:
  • 2008

Abstract

Foreground segmentation is a common foundation for many computer vision applications such as tracking and behavior analysis. Most existing algorithms for foreground segmentation learn pixel-based statistical models, which are sensitive to dynamic scene phenomena such as illumination changes, moving shadows, and swaying trees. To address this problem, we propose two block-based background models built on the recently developed incremental rank-(R1, R2, R3) tensor-based subspace learning algorithm (referred to as IRTSA [1]). These two IRTSA-based background models (IRTSA-GBM for grayscale images and IRTSA-CBM for color images) incrementally learn low-order tensor-based eigenspace representations that fully capture the intrinsic spatio-temporal characteristics of a scene, leading to robust foreground segmentation results. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed background models.
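
To illustrate the block-based eigenspace idea behind this family of models, the sketch below flags a block as foreground when its reconstruction error under a learned background subspace is large. It is a minimal sketch only, not the authors' IRTSA algorithm: it replaces the incremental rank-(R1, R2, R3) tensor decomposition with a plain batch PCA over flattened blocks, and every name and parameter (block_size, n_components, threshold) is an illustrative assumption.

```python
# Minimal block-based background subtraction via a low-rank eigenspace.
# Illustrative only; NOT the IRTSA tensor algorithm from the paper.
import numpy as np

def to_blocks(frame, block_size):
    """Split a 2-D grayscale frame into non-overlapping flattened blocks."""
    h, w = frame.shape
    bs = block_size
    return (frame[:h - h % bs, :w - w % bs]
            .reshape(h // bs, bs, w // bs, bs)
            .swapaxes(1, 2)
            .reshape(-1, bs * bs))

class BlockEigenBackground:
    """Hypothetical PCA-per-block background model (assumed names)."""

    def __init__(self, n_components=5, threshold=15.0):
        self.n_components = n_components
        self.threshold = threshold

    def fit(self, background_frames, block_size=8):
        """Learn a block eigenspace from background-only training frames."""
        self.block_size = block_size
        data = np.vstack([to_blocks(f, block_size) for f in background_frames])
        self.mean = data.mean(axis=0)
        # Top right-singular vectors of the centered block data span the
        # background subspace.
        _, _, vt = np.linalg.svd(data - self.mean, full_matrices=False)
        self.basis = vt[: self.n_components]

    def segment(self, frame):
        """Return a boolean per-block foreground mask for one frame."""
        blocks = to_blocks(frame, self.block_size) - self.mean
        coeffs = blocks @ self.basis.T
        recon = coeffs @ self.basis
        # Root-mean-square reconstruction error per block.
        err = np.sqrt(((blocks - recon) ** 2).mean(axis=1))
        return err > self.threshold

# Toy usage with synthetic 64x64 frames.
rng = np.random.default_rng(0)
bg = [50 + 5 * rng.standard_normal((64, 64)) for _ in range(30)]
model = BlockEigenBackground()
model.fit(bg)
test = bg[0].copy()
test[16:32, 16:32] += 120.0          # inject a bright "foreground" object
print(model.segment(test).reshape(8, 8).astype(int))
```

Because errors are scored per block rather than per pixel, small illumination and noise fluctuations that would flip individual pixels tend to be absorbed by the block subspace, which is the intuition behind block-based models; the tensor formulation in the paper additionally preserves the 2-D spatial structure that flattening each block into a vector discards.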