A local-motion-based probabilistic model for visual tracking
Pattern Recognition
This paper presents a novel probabilistic approach to integrating multiple cues in visual tracking. Tracking in the different cues is carried out by a set of interacting processes. Each process is represented by a Hidden Markov Model, and these parallel processes are arranged in a chain topology. The resulting Linked Hidden Markov Models naturally allow the use of particle filters and Belief Propagation in a unified framework: the target is tracked in each cue by a particle filter, and the particle filters for different cues interact via a message-passing scheme. The generality of the framework allows a customized combination of different cues in different situations, which is desirable from an implementation point of view. Our examples selectively integrate four visual cues: color, edges, motion, and contours. We demonstrate empirically that the ordering of the cues is nearly inconsequential, and that our approach is superior to alternatives such as Independent Integration and Hierarchical Integration in terms of flexibility and robustness.
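The interaction described above can be sketched in code. The following is a much-simplified illustration, not the paper's method: it uses a single forward message pass along a chain of per-cue particle filters in one dimension, with a Gaussian "agreement" term standing in for the Belief Propagation messages between the linked HMMs. All class and parameter names (`CueParticleFilter`, `motion_sigma`, `obs_sigma`, the noise levels) are hypothetical choices for the sketch, and each cue's likelihood is a Gaussian around a noisy observation rather than a real color/edge/motion/contour model.

```python
import math
import random

random.seed(0)

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian kernel."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

class CueParticleFilter:
    """Particle filter tracking a 1-D target position with one visual cue."""

    def __init__(self, n=200, init=0.0):
        self.particles = [init + random.gauss(0, 1) for _ in range(n)]
        self.weights = [1.0 / n] * n

    def predict(self, motion_sigma=0.5):
        # Random-walk dynamic model.
        self.particles = [p + random.gauss(0, motion_sigma) for p in self.particles]

    def update(self, observation, obs_sigma=1.0, message=None):
        # Cue likelihood, optionally modulated by the message passed
        # from the neighbouring cue in the chain.
        w = []
        for p in self.particles:
            like = gaussian(p, observation, obs_sigma)
            if message is not None:
                like *= gaussian(p, message, 2.0)  # soft agreement with neighbour
            w.append(like + 1e-12)
        total = sum(w)
        self.weights = [x / total for x in w]

    def estimate(self):
        return sum(p * w for p, w in zip(self.particles, self.weights))

    def resample(self):
        n = len(self.particles)
        cum, c = [], 0.0
        for w in self.weights:
            c += w
            cum.append(c)
        new = []
        for _ in range(n):
            u = random.random()
            i = next(i for i, cv in enumerate(cum) if cv >= u)  # linear scan; fine for a sketch
            new.append(self.particles[i])
        self.particles = new
        self.weights = [1.0 / n] * n

# Chain of cue-specific filters (e.g. color -> edge -> motion).
cues = [CueParticleFilter(init=0.0) for _ in range(3)]
true_pos = 0.0
for t in range(20):
    true_pos += 0.3  # ground-truth target drifts rightward
    message = None
    for f in cues:  # forward message pass along the chain
        f.predict()
        obs = true_pos + random.gauss(0, 0.5)  # noisy per-cue observation
        f.update(obs, message=message)
        message = f.estimate()  # summary passed to the next cue
        f.resample()

print(round(cues[-1].estimate(), 2), "vs true", round(true_pos, 2))
```

In the paper's framework the messages run both ways between neighbouring HMMs; the one-directional pass here is only meant to show why the ordering of cues in the chain matters little: each filter's posterior is shaped by both its own likelihood and its neighbour's belief.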