Robust Detection of Abandoned and Removed Objects in Complex Surveillance Videos

  • Authors:
  • YingLi Tian; R. S. Feris; Haowei Liu; A. Hampapur; Ming-Ting Sun

  • Affiliations:
  • IBM T.J. Watson Research Center, Yorktown Heights, NY, USA

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
  • Year:
  • 2011

Abstract

Tracking-based approaches for abandoned object detection often become unreliable in complex surveillance videos due to occlusions, lighting changes, and other factors. We present a new framework to robustly and efficiently detect abandoned and removed objects based on background subtraction (BGS) and foreground analysis, complemented by tracking to reduce false positives. In our system, the background is modeled by a mixture of three Gaussians. To handle complex situations, several improvements are implemented for shadow removal, quick adaptation to lighting changes, fragment reduction, and maintaining a stable update rate for video streams with different frame rates. The same Gaussian mixture models used for BGS are then employed to detect static foreground regions at no extra computational cost. Furthermore, the type of each static region (abandoned or removed) is determined by a method that exploits context information about the foreground masks, which significantly outperforms previous edge-based techniques. Based on the type of the static regions and user-defined parameters (e.g., object size and abandoned time), a matching method is proposed to detect abandoned and removed objects. A person-detection process is also integrated to distinguish static objects from stationary people. The robustness and efficiency of the proposed method are tested on IBM Smart Surveillance Solutions for public-safety applications in big cities and evaluated on several public databases, such as the Image Library for Intelligent Detection Systems (i-LIDS) and the IEEE Performance Evaluation of Tracking and Surveillance Workshop (PETS) 2006 datasets. The tests and evaluations demonstrate that our method is efficient enough to run in real time, while being robust to quick lighting changes and occlusions in complex environments.
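
The pipeline described in the abstract lends itself to a compact sketch. The Python/OpenCV code below is a minimal illustration under assumptions, not the authors' implementation: it substitutes OpenCV's stock MOG2 background subtractor for the paper's customized three-Gaussian mixture model, detects static regions with a per-pixel foreground-persistence counter, and uses a crude boundary-color comparison as a stand-in for the paper's context-based abandoned/removed classification. All file names, parameter names, and thresholds (STATIC_SECONDS, MIN_AREA, etc.) are hypothetical.

```python
import cv2
import numpy as np

# Illustrative parameters; names and values are hypothetical, not from the paper.
FPS = 25                  # nominal frame rate of the stream
STATIC_SECONDS = 30       # user-defined "abandoned time"
MIN_AREA = 400            # user-defined minimum object size, in pixels

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
persistence = None        # per-pixel count of consecutive foreground frames


def classify_region(frame, background, mask):
    """Crude stand-in for the paper's foreground-mask context test: if the
    static region blends with its surrounding boundary ring in the current
    frame better than in the background image, the object was likely removed
    (the scene behind it is now exposed); otherwise it was likely abandoned."""
    ring = cv2.dilate(mask, np.ones((9, 9), np.uint8)) - mask  # boundary ring
    inner = frame[mask > 0].mean(axis=0)
    d_now = np.linalg.norm(inner - frame[ring > 0].mean(axis=0))
    d_bg = np.linalg.norm(inner - background[ring > 0].mean(axis=0))
    return "removed" if d_now < d_bg else "abandoned"


cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break

    fg = subtractor.apply(frame)
    fg = (fg == 255).astype(np.uint8)   # keep foreground, drop shadow label (127)

    if persistence is None:
        persistence = np.zeros(fg.shape, np.int32)
    persistence = (persistence + 1) * fg  # counter resets where pixel is background

    # Static foreground: pixels labeled foreground longer than the abandoned time.
    static = ((persistence > STATIC_SECONDS * FPS) * 255).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(static, connectivity=8)
    background = subtractor.getBackgroundImage()
    for i in range(1, n):               # label 0 is the scene background
        if stats[i, cv2.CC_STAT_AREA] < MIN_AREA:
            continue
        mask = (labels == i).astype(np.uint8)
        # A person detector would be applied here to skip stationary people.
        print(classify_region(frame, background, mask))
cap.release()
```

Note that a stock MOG2 model eventually absorbs a motionless object into the background, so in this sketch the learning rate must be tuned so static regions survive until the abandoned-time threshold fires; the paper avoids that tension by reading static regions directly out of the same Gaussian mixtures used for BGS, and additionally integrates person detection to suppress alerts on stationary people.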