In this paper, we present a novel background generation method that applies frame differencing and a median filter to regions sensitive to illumination changes. Background generation is widely used as a preprocessing step for video-based tracking, surveillance, and object detection. The proposed method exploits the differences and motion changes between two consecutive frames to cope with illumination changes in an image sequence, and applies a median filter to adaptively generate a robust background. As a result, it reconstructs the background more efficiently, using fewer frames than existing methods.
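The abstract does not give implementation details, but the general scheme it describes — marking pixels as stationary via frame differencing, then taking a temporal median over recent stationary estimates — can be sketched as follows. This is a minimal illustration under assumed parameters (`diff_thresh`, `max_history` are hypothetical names, not from the paper), not the authors' actual algorithm:

```python
import numpy as np

def update_background(background, prev_frame, frame, history,
                      diff_thresh=15, max_history=5):
    """One update step: frame differencing + temporal median (illustrative sketch)."""
    # Pixels whose inter-frame difference is small are treated as stationary.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    stationary = diff <= diff_thresh
    # Keep the new pixel value where stationary; keep the old background elsewhere.
    history.append(np.where(stationary, frame, background))
    if len(history) > max_history:
        history.pop(0)
    # Median over recent estimates suppresses transient foreground values.
    return np.median(np.stack(history), axis=0).astype(frame.dtype)

# Demo: a static scene (value 100) with a bright object sweeping across it.
frames = []
for t in range(6):
    f = np.full((8, 8), 100, dtype=np.uint8)
    f[:, t] = 200  # moving object occupies one column per frame
    frames.append(f)

bg = frames[0].copy()  # initial background still contains the object
history = []
for prev, cur in zip(frames, frames[1:]):
    bg = update_background(bg, prev, cur, history)
```

After only a handful of frames the moving object is filtered out and `bg` converges to the static scene value, which matches the claim that the approach needs relatively few frames.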