We propose an online algorithm to segment foreground from background in videos captured by a moving camera. Our algorithm combines temporal model propagation and spatial model composition to generate foreground and background models, from which likelihood maps are computed; an energy minimization technique is then applied to the likelihood maps for segmentation. In the temporal step, block-wise models are transferred from the previous frame using motion information, and pixel-wise foreground/background likelihoods and labels in the current frame are estimated from the transferred models. In the spatial step, another set of block-wise foreground/background models is constructed from the models and labels given by the temporal step, and the corresponding per-pixel likelihoods are also generated. A graph-cut algorithm performs segmentation based on the foreground/background likelihood maps, and the segmentation result is used to update the motion of each segment in a block; the temporal model propagation and spatial model composition steps are then re-evaluated with the updated motions, yielding an iterative procedure. We tested our framework on various challenging videos involving large camera and object motions, significant background changes, and clutter.
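The loop described in the abstract, in which block-wise models produce per-pixel likelihood maps that are then segmented and fed back, can be sketched as follows. This is a minimal toy illustration under simplifying assumptions, not the authors' implementation: the function names (`block_models`, `likelihood_maps`, `segment`) are hypothetical, the models are reduced to per-block mean intensities with Gaussian likelihoods, motion compensation is omitted, and a per-pixel likelihood-ratio decision stands in for the graph-cut energy minimization.

```python
import numpy as np

def block_models(frame, labels, block=8):
    """Build toy block-wise foreground/background models (per-block mean intensity)."""
    h, w = frame.shape
    fg = np.zeros((h // block, w // block))
    bg = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            patch = frame[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            mask = labels[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            fg[by, bx] = patch[mask].mean() if mask.any() else 0.0
            bg[by, bx] = patch[~mask].mean() if (~mask).any() else 0.0
    return fg, bg

def likelihood_maps(frame, fg, bg, block=8, sigma=10.0):
    """Compute per-pixel Gaussian likelihoods under the block-wise models."""
    fg_up = np.kron(fg, np.ones((block, block)))  # upsample block models to pixels
    bg_up = np.kron(bg, np.ones((block, block)))
    lfg = np.exp(-(frame - fg_up) ** 2 / (2 * sigma ** 2))
    lbg = np.exp(-(frame - bg_up) ** 2 / (2 * sigma ** 2))
    return lfg, lbg

def segment(frame, prev_labels, iters=3, block=8):
    """Iterate: rebuild models from current labels, then relabel each pixel
    by comparing foreground vs. background likelihood (graph-cut placeholder)."""
    labels = prev_labels.copy()
    for _ in range(iters):
        fg, bg = block_models(frame, labels, block)
        lfg, lbg = likelihood_maps(frame, fg, bg, block)
        labels = lfg > lbg
    return labels
```

For example, starting from a rough initial mask around a bright square on a dark background, the iterations tighten the labels to the square, mimicking how the paper's feedback loop refines segmentation; a real implementation would replace the final comparison with a graph cut over the two likelihood maps.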