Moving foreground object detection via robust SIFT trajectories

  • Authors:
Shih-Wei Sun; Yu-Chiang Frank Wang; Fay Huang; Hong-Yuan Mark Liao

  • Affiliations:
  • Dept. of New Media Art, Taipei National University of the Arts, Taipei, Taiwan and Center for Art and Technology, Taipei National University of the Arts, Taipei, Taiwan
  • Research Center for IT Innovation, Academia Sinica, Taipei, Taiwan and Inst. of Information Science, Academia Sinica, Taipei, Taiwan
  • Inst. of Computer Science and Info. Engineering, National Ilan University, Yi-Lan, Taiwan
  • Inst. of Information Science, Academia Sinica, Taipei, Taiwan and Dept. of Computer Science and Info. Engineering, National Chiao Tung University, Hsinchu, Taiwan

  • Venue:
  • Journal of Visual Communication and Image Representation
  • Year:
  • 2013

Abstract

In this paper, we present an automatic foreground object detection method for videos captured by freely moving cameras. While we focus on extracting a single foreground object of interest throughout a video sequence, our approach requires neither training data nor user interaction. Based on SIFT correspondences across video frames, we construct robust SIFT trajectories from a calculated foreground feature point probability. This probability identifies candidate foreground feature points in each frame without user interaction such as parameter or threshold tuning. Furthermore, we propose a probabilistic consensus foreground object template (CFOT), which is applied directly to the input video for moving object detection via template matching. Our CFOT can detect the foreground object in videos captured by a fast-moving camera, even when the contrast between the foreground and background regions is low. Moreover, the proposed method generalizes to foreground object detection in dynamic backgrounds and is robust to viewpoint changes across video frames. The contribution of this paper is threefold: (1) we provide a robust decision process for detecting the foreground object of interest in videos with contrast and viewpoint variations; (2) our method builds longer SIFT trajectories, which are shown to be robust and effective for object detection tasks; and (3) the construction of our CFOT is insensitive to the initial estimate of the foreground region of interest, while achieving excellent foreground object detection results on real-world video data.
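The core mechanism the abstract describes — matching SIFT descriptors between consecutive frames and chaining the matches into trajectories — can be illustrated with a minimal NumPy sketch. Everything below is a generic illustration, not the authors' implementation: the ratio-test matcher is the standard Lowe criterion, the `build_trajectories` helper and its data layout are hypothetical, and a real system would use an actual SIFT extractor (e.g. OpenCV's `cv2.SIFT_create()`) rather than hand-made descriptors.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.75):
    """Match each descriptor in d1 to its nearest neighbor in d2,
    keeping a match only if it passes Lowe's ratio test.
    Returns a list of (i, j) pairs: d1[i] matched to d2[j]."""
    matches = []
    for i, d in enumerate(d1):
        dists = np.linalg.norm(d2 - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
        elif len(order) == 1:
            matches.append((i, int(order[0])))
    return matches

def build_trajectories(frames):
    """frames: list of (points, descriptors) per frame, where points is
    an (N, 2) array and descriptors an (N, D) array.
    Chains frame-to-frame matches into trajectories, each a list of
    (frame_index, (x, y)) entries; longer trajectories indicate
    features tracked stably across the sequence."""
    trajectories = []
    active = {}  # feature index in the previous frame -> its trajectory
    for t, (pts, desc) in enumerate(frames):
        if t == 0:
            for i, p in enumerate(pts):
                traj = [(0, tuple(p))]
                trajectories.append(traj)
                active[i] = traj
            continue
        _, prev_desc = frames[t - 1]
        new_active = {}
        for i, j in match_descriptors(prev_desc, desc):
            if i in active:  # extend an existing trajectory
                traj = active[i]
                traj.append((t, tuple(pts[j])))
                new_active[j] = traj
        active = new_active  # unmatched features end their trajectories
    return trajectories
```

In the paper's setting, the per-trajectory foreground feature point probability would then be computed over these chains to separate foreground motion from camera-induced background motion; this sketch only covers the trajectory-construction step.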