We propose a framework for multitarget tracking with feedback that accounts for scene contextual information. We demonstrate the framework on two types of context-dependent events, namely target births (i.e., objects entering the scene or reappearing after an occlusion) and spatially persistent clutter. The spatial distributions of birth and clutter events are learned incrementally as Gaussian mixture models. The learned models feed a probability hypothesis density (PHD) filter that spatially modulates its strength based on the contextual information. Experimental results on a large video surveillance dataset, using a standard evaluation protocol, show that the feedback improves tracking accuracy by 9% to 14% by reducing the number of false detections and false trajectories. This performance improvement is achieved without increasing the computational complexity of the tracker.
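The core idea of the feedback loop can be illustrated with a small sketch: maintain a Gaussian mixture over image locations where births have been observed, and use its density to spatially modulate the birth intensity of the tracker. This is only a minimal illustration under simplifying assumptions (isotropic components, a crude responsibility-weighted online update); the class and method names are hypothetical and this is not the paper's exact recursive learning rule or PHD implementation.

```python
import numpy as np

class BirthIntensityModel:
    """Sketch of a spatial birth-event model as an isotropic Gaussian
    mixture, used to modulate the birth strength of a PHD-style tracker.
    Illustrative only; the actual method learns full mixtures recursively."""

    def __init__(self, means, sigmas, weights, base_rate=0.1):
        self.means = np.asarray(means, dtype=float)      # (K, 2) entry zones
        self.sigmas = np.asarray(sigmas, dtype=float)    # (K,) std devs
        self.weights = np.asarray(weights, dtype=float)  # (K,), sums to 1
        self.base_rate = base_rate                       # overall birth strength

    def _component_pdf(self, x):
        # 2-D isotropic Gaussian density of each component at x, shape (K,)
        d2 = np.sum((self.means - x) ** 2, axis=1)
        return np.exp(-0.5 * d2 / self.sigmas ** 2) / (2 * np.pi * self.sigmas ** 2)

    def intensity(self, x):
        """Birth intensity at image location x: births are favored where
        past entry events have concentrated, suppressed elsewhere."""
        x = np.asarray(x, dtype=float)
        return self.base_rate * float(self.weights @ self._component_pdf(x))

    def update(self, x, lr=0.05):
        """Incremental learning step: shift component means and weights
        toward a newly observed birth at x, proportionally to each
        component's responsibility for that observation."""
        x = np.asarray(x, dtype=float)
        resp = self.weights * self._component_pdf(x)
        resp /= resp.sum()
        self.means += lr * resp[:, None] * (x - self.means)
        self.weights = (1.0 - lr) * self.weights + lr * resp
```

A clutter model can be handled symmetrically: the same mixture machinery scores candidate detections, and the tracker down-weights measurements falling in regions of persistently high clutter density, which is what reduces false detections and false trajectories without extra computational cost per frame.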