Background subtraction using low rank and group sparsity constraints

  • Authors:
  • Xinyi Cui; Junzhou Huang; Shaoting Zhang; Dimitris N. Metaxas

  • Affiliations:
  • CS Dept., Rutgers University, Piscataway, NJ; CSE Dept., Univ. of Texas at Arlington, Arlington, TX; CS Dept., Rutgers University, Piscataway, NJ; CS Dept., Rutgers University, Piscataway, NJ

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision - Volume Part I
  • Year:
  • 2012

Abstract

Background subtraction has been widely investigated in recent years. Most previous work has focused on stationary cameras. Recently, moving cameras have also been studied, since videos from mobile devices have increased significantly. In this paper, we propose a unified and robust framework to effectively handle diverse types of videos, e.g., videos from stationary or moving cameras. Our model is inspired by two observations: 1) background motion caused by orthographic cameras lies in a low rank subspace, and 2) pixels belonging to one trajectory tend to group together. Based on these two observations, we introduce a new model using both low rank and group sparsity constraints. It is able to robustly decompose a motion trajectory matrix into foreground and background matrices. After obtaining the foreground and background trajectories, the information gathered from them is used to build a statistical model that further labels frames at the pixel level. Extensive experiments demonstrate very competitive performance on both synthetic data and real videos.
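To make the low rank plus group sparsity idea concrete, the Python sketch below decomposes a synthetic trajectory matrix D (stacked x/y coordinates of trajectories over frames, one trajectory per column) as D ~ B + S, where B is kept low rank via singular value thresholding and S is kept column-group sparse via group soft-thresholding, so that foreground trajectories are selected as whole columns. This is only an illustrative sketch under assumed choices; the alternating proximal scheme, the function and parameter names (decompose, lam_rank, lam_group, step, n_iter), and the synthetic data are not taken from the paper's actual optimization or code.

import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau * (nuclear norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def group_shrink(X, tau):
    # Column-wise group soft-thresholding: proximal operator of
    # tau * sum_j ||X_j||_2, where each column is one trajectory.
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def decompose(D, lam_rank=1.0, lam_group=1.0, step=0.5, n_iter=200):
    # Alternating proximal-gradient updates on
    # 0.5 * ||D - B - S||_F^2 + lam_rank * ||B||_* + lam_group * sum_j ||S_j||_2
    # (an assumed formulation for illustration, not the paper's exact model).
    B = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        R = D - B - S                        # residual of the current split
        B = svt(B + step * R, step * lam_rank)
        R = D - B - S
        S = group_shrink(S + step * R, step * lam_group)
    return B, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic trajectory matrix: rank-3 "background" plus a few
    # perturbed "foreground" columns.
    D = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 100))
    fg = rng.choice(100, size=8, replace=False)
    D[:, fg] += 5.0 * rng.standard_normal((40, 8))
    B, S = decompose(D, lam_rank=2.0, lam_group=3.0)
    detected = np.flatnonzero(np.linalg.norm(S, axis=0) > 1e-3)
    print("foreground columns:", sorted(detected))

On such synthetic data, the columns with nonzero group norm in S correspond to the perturbed trajectories, mirroring the paper's idea that foreground trajectories are separated as groups while background motion stays in a low rank subspace.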