Online moving camera background subtraction

  • Authors:
  • Ali Elqursh;Ahmed Elgammal

  • Affiliations:
  • Rutgers University;Rutgers University

  • Venue:
  • ECCV'12 Proceedings of the 12th European Conference on Computer Vision - Part VI
  • Year:
  • 2012

Abstract

Recently, several methods for background subtraction from a moving camera have been proposed. They use bottom-up cues to segment video frames into foreground and background regions. Because they lack explicit models, they can easily fail to detect a foreground object when such cues are ambiguous in parts of the video. This becomes even more challenging when videos must be processed online. We present a method that learns pixel-based models for the foreground and background regions and, in addition, segments each frame in an online framework. The method uses long-term trajectories together with a Bayesian filtering framework to estimate motion and appearance models. We compare our method to previous approaches and show results on challenging video sequences.
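The abstract describes recursively estimating per-pixel foreground/background beliefs by fusing motion and appearance cues in a Bayesian filtering framework. The following is a minimal sketch of that general idea, not the authors' implementation: the function names, the two-state transition model, and parameters such as `p_stay` are illustrative assumptions.

```python
# Sketch of a per-pixel recursive Bayesian update for FG/BG segmentation.
# Assumptions (not from the paper): a simple two-state Markov transition and
# externally supplied motion/appearance likelihood maps per frame.
import numpy as np

def predict(posterior_fg, p_stay=0.9):
    """Prediction step: propagate last frame's P(foreground) per pixel
    through a two-state Markov transition (stay vs. switch label)."""
    p_switch = 1.0 - p_stay
    return p_stay * posterior_fg + p_switch * (1.0 - posterior_fg)

def update(prior_fg, lik_fg, lik_bg, eps=1e-8):
    """Correction step: fuse the predicted prior with per-pixel likelihoods
    of the observed cues under the foreground and background models."""
    num = lik_fg * prior_fg
    den = num + lik_bg * (1.0 - prior_fg) + eps
    return num / den

if __name__ == "__main__":
    # Toy usage: one filtering step on a 4x4 frame with synthetic likelihoods
    # standing in for learned motion/appearance models.
    rng = np.random.default_rng(0)
    posterior = np.full((4, 4), 0.5)        # uninformative initial belief
    lik_fg = rng.uniform(0.1, 1.0, (4, 4))  # hypothetical foreground likelihood
    lik_bg = rng.uniform(0.1, 1.0, (4, 4))  # hypothetical background likelihood
    prior = predict(posterior)
    posterior = update(prior, lik_fg, lik_bg)
    labels = posterior > 0.5                # hard FG/BG segmentation
    print(posterior.round(2))
    print(labels)
```

In an online setting, this predict/update loop would run once per incoming frame, with the likelihood maps refreshed from the evolving motion and appearance models.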