In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational cost. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers.
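To make the joint-sparsity idea concrete, the following is a minimal sketch of the ℓ2,1-regularized multi-task representation step solved with accelerated proximal gradient (FISTA-style) updates. It minimizes 0.5·‖DX − Y‖²_F + λ‖X‖_{2,1}, where the columns of Y are particle observations and D is the template dictionary; the prox of the ℓ2,1 norm is closed-form row-wise shrinkage, which is what makes each APG iteration cheap. All function names, parameters, and the FISTA variant shown here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l21_prox(X, tau):
    # Closed-form proximal operator of tau * ||X||_{2,1}:
    # shrink each row of X toward zero by tau in l2 norm.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * X

def mtt_apg(D, Y, lam=0.1, n_iter=200):
    """Jointly represent all particles Y (d x n) over dictionary D (d x m)
    by minimizing 0.5 * ||D X - Y||_F^2 + lam * ||X||_{2,1}
    with accelerated proximal gradient (FISTA-style) updates.
    Illustrative sketch; not the paper's reference implementation."""
    m, n = D.shape[1], Y.shape[1]
    X = np.zeros((m, n))     # joint coefficient matrix (rows = templates)
    Z = X.copy()             # extrapolated point
    t = 1.0                  # momentum parameter
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - Y)           # gradient of the data-fit term
        X_new = l21_prox(Z - grad / L, lam / L)  # closed-form prox step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Z = X_new + ((t - 1) / t_new) * (X_new - X)
        X, t = X_new, t_new
    return X
```

The row-wise shrinkage is what enforces joint sparsity: a template row is either shared across all particle representations or zeroed out for all of them, which is the interdependency the abstract refers to. Setting p = q = 1 instead would decouple the columns and recover independent ℓ1 problems, i.e., the L11 special case.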