Most statistical background subtraction techniques are based on the analysis of temporal color/intensity distributions. However, learning statistics over a series of time frames can be problematic, especially when no frame free of moving objects is available or when the available memory is insufficient to store the series of frames needed for learning. In this letter, we propose a spatial variation to the traditional temporal framework. The proposed framework allows statistical motion detection with methods trained on one background frame instead of a series of frames, as is usually the case. Our framework includes two spatial background subtraction approaches suitable for different applications. The first approach is meant for scenes with a nonstatic background due to noise, camera jitter, or animation in the scene (e.g., waving trees, fluttering leaves). This approach models each pixel with two PDFs: one unimodal PDF and one multimodal PDF, both trained on one background frame. In this way, the method can handle backgrounds with both static and nonstatic areas. The second spatial approach is designed to use as little processing time and memory as possible. Based on the assumption that neighboring pixels often share a similar temporal distribution, this second approach models the background with one global mixture of Gaussians.
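The core spatial idea can be illustrated with a minimal sketch (not the authors' actual method): instead of estimating each pixel's statistics from a temporal series of frames, estimate a per-pixel Gaussian from a spatial neighborhood of a single background frame, then flag pixels of a new frame that deviate significantly. Function names, the patch size, and the threshold `k` below are illustrative assumptions, not values from the letter.

```python
import numpy as np

def spatial_background_model(bg, patch=5):
    """Estimate a per-pixel Gaussian (mean, variance) from a spatial
    neighborhood of ONE background frame, instead of a series of frames."""
    pad = patch // 2
    padded = np.pad(bg.astype(np.float64), pad, mode="reflect")
    H, W = bg.shape
    mean = np.zeros((H, W))
    var = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Statistics of the patch centered on (i, j) stand in for
            # the temporal distribution of that pixel.
            win = padded[i:i + patch, j:j + patch]
            mean[i, j] = win.mean()
            var[i, j] = win.var() + 1e-6  # avoid zero variance in flat areas

    return mean, var

def detect_foreground(frame, mean, var, k=2.5):
    """Label a pixel as foreground when it deviates from the spatially
    learned Gaussian by more than k standard deviations."""
    return np.abs(frame.astype(np.float64) - mean) > k * np.sqrt(var)

# Toy usage: a gradient background, then a frame with a bright object.
bg = np.tile(np.arange(20.0), (20, 1))
mu, sigma2 = spatial_background_model(bg)
frame = bg.copy()
frame[5:10, 5:10] += 100.0  # hypothetical moving object
mask = detect_foreground(frame, mu, sigma2)
```

This sketch corresponds only to the unimodal part of the first approach; the letter additionally trains a multimodal PDF per pixel to cover animated background areas, and the second approach replaces per-pixel models with one global mixture of Gaussians shared by all pixels.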