CONDENSATION—Conditional Density Propagation for Visual Tracking
International Journal of Computer Vision
Elliptical Head Tracking Using Intensity Gradients and Color Histograms
CVPR '98 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Integral Histogram: A Fast Way To Extract Histograms in Cartesian Spaces
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1
Democratic Integration: Self-Organized Integration of Adaptive Cues
Neural Computation
Robust Fragments-based Tracking using the Integral Histogram
CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1
Sequential Monte Carlo tracking by fusing multiple cues in video sequences
Image and Vision Computing
Dynamic Integration of Generalized Cues for Person Tracking
ECCV '08 Proceedings of the 10th European Conference on Computer Vision: Part IV
Semi-supervised On-Line Boosting for Robust Tracking
ECCV '08 Proceedings of the 10th European Conference on Computer Vision: Part I
Mean Shift tracking with multiple reference color histograms
Computer Vision and Image Understanding
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part II
Efficient GPU implementation of the integral histogram
ACCV'12 Proceedings of the 11th international conference on Computer Vision - Volume Part I
In this paper, we address the problem of part-based tracking by proposing a new fragments-based tracker. The proposed tracker extends the recently proposed FragTrack algorithm with an adaptive cue integration scheme. This is done by embedding the original tracker in a particle filter framework, associating a reliability value with each fragment that describes a different part of the target object, and dynamically adjusting these reliabilities at each frame with respect to the current context. In particular, each fragment contributes its vote to the joint tracking result in proportion to its reliability, which yields better accuracy in handling partial occlusions and pose changes while preserving, and even improving, the efficiency of the original tracker. To demonstrate the performance and effectiveness of the proposed algorithm, we present qualitative and quantitative results on a number of challenging video sequences.
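The abstract's core mechanism — fragments casting reliability-weighted votes over particle-filter candidates, with reliabilities re-estimated each frame — can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the agreement-based update rule (in the spirit of the democratic-integration reference above), and the toy vote values are all illustrative assumptions.

```python
import numpy as np

def combined_scores(votes, reliabilities):
    """Weight each fragment's per-particle vote by its reliability.

    votes: (n_fragments, n_particles) similarity scores in [0, 1]
    reliabilities: (n_fragments,) non-negative weights summing to 1
    Returns the joint score of each candidate particle.
    """
    return reliabilities @ votes  # shape (n_particles,)

def update_reliabilities(votes, reliabilities, best_particle, rate=0.2):
    """Hypothetical democratic-integration-style update: fragments whose
    vote agrees with the jointly chosen particle gain reliability, while
    dissenting fragments (e.g. occluded parts) lose it."""
    agreement = votes[:, best_particle]
    target = agreement / agreement.sum()           # normalized agreement
    new_r = (1.0 - rate) * reliabilities + rate * target
    return new_r / new_r.sum()                     # keep weights normalized

# Toy example: 3 fragments vote on 4 candidate particles.
votes = np.array([
    [0.9, 0.2, 0.1, 0.3],   # fragment 0 favours particle 0
    [0.8, 0.3, 0.2, 0.1],   # fragment 1 agrees with fragment 0
    [0.1, 0.1, 0.9, 0.2],   # fragment 2 disagrees (e.g. it is occluded)
])
r = np.full(3, 1.0 / 3.0)   # start with uniform reliabilities

scores = combined_scores(votes, r)
best = int(np.argmax(scores))            # particle 0 wins the joint vote
r = update_reliabilities(votes, r, best)
# fragment 2 disagreed with the joint result, so its reliability drops
```

In this toy run the joint vote selects particle 0, and the occluded fragment's reliability falls below its initial 1/3, so in subsequent frames it would contribute less to the joint result — the behaviour the abstract credits for robustness to partial occlusion.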