In this paper, we propose a robust, distracter-resistant tracking approach that learns a discriminative metric, adaptively weighing the importance of features on the fly. The metric is designed specifically for the tracking problem: a margin objective function combines distance-margin maximization with a reconstruction-error constraint, which acts as a force pushing distracters out of the positive space and into the negative space. Because negative samples in tracking are highly varied, we further introduce a similarity-propagation technique that applies a second force to distracters from within the negative space. The resulting discriminative metric preserves the information most useful for separating the target from distracters while keeping the optimal metric stable. We seamlessly combine it with the popular L1-minimization tracker, so our tracker not only resists distracters but also inherits the L1 tracker's robustness to occlusion. Quantitative comparisons with several state-of-the-art algorithms on many challenging video sequences show that our method resists distracters effectively and achieves superior performance.
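To make the margin objective concrete, the sketch below shows a generic margin-based (hinge-loss) Mahalanobis metric update of the kind the abstract describes: positive samples are pulled toward the target while negatives (distracters) are pushed at least a margin farther away, and the metric is projected back onto the PSD cone. This is a minimal illustration of the general technique, not the paper's exact objective; the function name, learning rate, and loss form are assumptions.

```python
import numpy as np

def hinge_metric_update(M, target, positives, negatives, margin=1.0, lr=0.01):
    """One gradient step on a margin (hinge) objective for a Mahalanobis
    metric M: pull positives toward the target, push negatives at least
    `margin` farther away. A generic sketch of margin-based metric
    learning, not the paper's exact formulation."""
    def sq_dist(M, x, y):
        d = x - y
        return d @ M @ d  # squared Mahalanobis distance under M

    grad = np.zeros_like(M)
    for p in positives:
        for n in negatives:
            # Hinge term: active when the negative is not at least
            # `margin` farther from the target than the positive.
            loss = margin + sq_dist(M, target, p) - sq_dist(M, target, n)
            if loss > 0:
                dp = (target - p)[:, None]
                dn = (target - n)[:, None]
                grad += dp @ dp.T - dn @ dn.T
    M = M - lr * grad
    # Project back onto the PSD cone so M remains a valid metric.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T
```

In a tracking loop, such an update would be run per frame with the current target template and sampled positive/negative patches, keeping the metric adaptive as appearance changes.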