Bag of features (BoF) provides an effective and efficient representation for object tracking in video sequences. However, the hard assignment used in BoF generation inevitably introduces quantization errors, which can lead to inaccurate or even failed tracking. In this paper, we propose a novel soft-assigned bag-of-features tracking approach (SABoF) that significantly reduces the influence of quantization errors and yields more accurate and stable tracking results. We initialize tracking by specifying the tracked object and constructing the codebook. Then, we represent each candidate target with a soft-assigned BoF and measure its similarity to the tracked object; the most similar candidate in each frame is selected as the tracking result. To further improve performance, we refine the tracking results by combining incremental PCA tracking. The proposed approach is evaluated on challenging video sequences from the CAVIAR dataset. Experiments show that our approach outperforms current dominant methods under complex conditions.
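The soft-assignment and similarity steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes Gaussian kernel weights for the soft assignment and a Bhattacharyya coefficient for histogram similarity, and the kernel bandwidth `sigma` is a hypothetical parameter.

```python
import numpy as np

def soft_assign_bof(features, codebook, sigma=1.0):
    """Build a soft-assigned bag-of-features histogram.

    Instead of voting only for its nearest codeword (hard assignment),
    each local feature contributes to every codeword with a weight
    proportional to a Gaussian kernel of its distance to that codeword.
    This spreads quantization ambiguity over nearby codewords.
    """
    # Pairwise squared distances, shape (n_features, n_codewords).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    # Gaussian kernel weights, normalized so each feature's weights sum to 1.
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Accumulate weights per codeword and normalize the histogram.
    hist = w.sum(axis=0)
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return np.sqrt(p * q).sum()
```

A tracker using this representation would compute `soft_assign_bof` for the local features of every candidate window in a frame and select the candidate whose histogram maximizes `bhattacharyya` against the tracked object's histogram.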