Enormous uncertainties in unconstrained environments lead to a fundamental dilemma that many tracking algorithms face in practice: tracking has to be computationally efficient, but verifying whether the tracker is following the true target tends to be demanding, especially when the background is cluttered and/or occlusion occurs. Lacking a good solution to this problem, many existing methods are either effective but computationally intensive, relying on sophisticated image observation models, or efficient but vulnerable to false alarms. This greatly challenges long-duration robust tracking. This paper presents a novel solution to this dilemma by considering the context of the tracking scene. Specifically, we integrate into the tracking process a set of auxiliary objects that are automatically discovered in the video on the fly by data mining. Auxiliary objects have three properties, at least over a short time interval: 1) persistent co-occurrence with the target, 2) consistent motion correlation with the target, and 3) ease of tracking. Regarding these auxiliary objects as the context of the target, collaborative tracking of the target and its auxiliary objects yields efficient computation as well as strong verification. Our extensive experiments demonstrate strong performance on very challenging real-world test cases.
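As a rough illustration of the first two auxiliary-object properties, the sketch below scores candidate image regions against a target trajectory by their co-occurrence ratio and the correlation of their frame-to-frame motion. This is a hypothetical simplification, not the paper's actual data-mining procedure; the function names, thresholds, and trajectory representation are all assumptions made for the example.

```python
# Hypothetical sketch of auxiliary-object selection (not the paper's
# actual mining algorithm): a candidate qualifies if it 1) co-occurs
# persistently with the target and 2) its motion correlates with the
# target's motion over a short window. Thresholds are illustrative.

def _pearson(a, b):
    """Plain Pearson correlation; returns 0.0 for degenerate (constant) input."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / ((va * vb) ** 0.5) if va and vb else 0.0

def cooccurrence_ratio(target_visible, cand_visible):
    """Fraction of target-visible frames in which the candidate also appears."""
    both = sum(1 for t, c in zip(target_visible, cand_visible) if t and c)
    total = sum(target_visible)
    return both / total if total else 0.0

def motion_correlation(target_x, cand_x):
    """Correlation of frame-to-frame displacements along one axis."""
    dt = [b - a for a, b in zip(target_x, target_x[1:])]
    dc = [b - a for a, b in zip(cand_x, cand_x[1:])]
    return _pearson(dt, dc)

def select_auxiliaries(target, candidates, co_thresh=0.9, corr_thresh=0.8):
    """Keep candidates satisfying both the co-occurrence and motion tests."""
    picked = []
    for name, cand in candidates.items():
        co = cooccurrence_ratio(target["visible"], cand["visible"])
        mc = motion_correlation(target["x"], cand["x"])
        if co >= co_thresh and mc >= corr_thresh:
            picked.append(name)
    return picked
```

A region attached to the target (e.g. a shoulder region when tracking a face) would pass both tests, while an independently moving background object fails the motion test and an intermittently visible one fails the co-occurrence test. The third property, ease of tracking, would in practice be checked separately (e.g. by the stability of the candidate's own tracker).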