Most existing appearance models for visual tracking construct a pixel-based representation of object appearance and are therefore incapable of fully capturing both the global and local spatial layout information of object appearance. To address this problem, we propose a novel spatial Log-Euclidean appearance model (referred to as SLAM) based on the recently introduced Log-Euclidean Riemannian metric [23]. SLAM captures both the global and local spatial layout information of object appearance by constructing a block-based Log-Euclidean eigenspace representation. Specifically, learning the proposed SLAM consists of five steps: appearance block division, online Log-Euclidean eigenspace learning, local spatial weighting, global spatial weighting, and likelihood evaluation. Furthermore, a novel online Log-Euclidean Riemannian subspace learning algorithm (IRSL) [14] is applied to incrementally update the proposed SLAM. Tracking is then carried out within a Bayesian state inference framework in which a particle filter propagates sample distributions over time. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed SLAM.
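As a rough illustration of the Log-Euclidean idea underlying the block-based representation (a minimal sketch, not the authors' implementation; function names are hypothetical), each block's symmetric positive-definite covariance descriptor can be mapped via the matrix logarithm into a flat tangent space, where ordinary linear eigenspace learning applies:

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix via
    eigendecomposition; maps the SPD manifold to a vector space under
    the Log-Euclidean metric."""
    w, V = np.linalg.eigh(C)
    w = np.maximum(w, 1e-12)  # guard against numerical non-positivity
    return V @ np.diag(np.log(w)) @ V.T

def block_log_euclidean_features(cov_blocks):
    """Map each appearance block's covariance descriptor to the tangent
    space and flatten it, yielding vectors on which an (incremental)
    eigenspace can be learned."""
    return np.stack([logm_spd(C).ravel() for C in cov_blocks])

# Example: random SPD covariance descriptors for 4 appearance blocks
rng = np.random.default_rng(0)
blocks = []
for _ in range(4):
    A = rng.standard_normal((5, 5))
    blocks.append(A @ A.T + 5 * np.eye(5))  # SPD by construction

feats = block_log_euclidean_features(blocks)
print(feats.shape)  # (4, 25)
```

Working in this log-mapped space is what allows standard incremental subspace learning (such as IRSL) to be applied to manifold-valued covariance descriptors.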