We propose a new technique for fusing multiple cues to robustly segment an object from its background in video sequences that suffer from abrupt changes in both the illumination and the position of the target. Robustness is achieved by integrating appearance and geometric object features and by estimating them with Bayesian filters, such as Kalman or particle filters. In particular, each filter estimates the state of a specific object feature, conditionally dependent on another feature estimated by a distinct filter. This dependence yields improved target representations, making it possible to segment the target from the background even in non-stationary sequences. Given that the procedure of a Bayesian filter may be described by a 'hypotheses generation - hypotheses correction' strategy, the major novelty of our methodology compared to previous approaches is that the mutual dependence between filters is introduced during feature observation, i.e., in the 'hypotheses correction' stage, instead of when generating the hypotheses. This proves much more effective in terms of accuracy and reliability. The proposed method is analytically justified and applied to develop a robust tracking system that simultaneously adapts online the colorspace in which the image points are represented, the color distributions, the contour of the object, and its bounding box. Results on synthetic data and real video sequences demonstrate the robustness and versatility of our method.
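The correction-stage coupling described above can be illustrated with a minimal sketch. The toy model below is entirely hypothetical (1-D position and a scalar "appearance" value linked to it, Gaussian noise, multinomial resampling) and is not the authors' implementation: two particle filters each run the usual 'hypotheses generation - hypotheses correction' loop, but the second filter's observation likelihood is conditioned on the first filter's current estimate, so the dependence enters at the correction stage rather than when the hypotheses are generated.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, transition_std, likelihood):
    """One particle-filter iteration: generate hypotheses, then correct them."""
    # hypotheses generation: propagate particles through a random-walk motion model
    particles = particles + rng.normal(0.0, transition_std, size=particles.shape)
    # hypotheses correction: reweight each hypothesis by the observation likelihood
    weights = likelihood(particles)
    weights /= weights.sum()
    # multinomial resampling (kept simple for the sketch)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# hypothetical ground truth: position drifts; appearance depends on position
T, N = 50, 300
true_pos = 5.0 + np.cumsum(rng.normal(0.0, 0.1, T))
true_col = 0.5 * true_pos                      # assumed coupling between features
obs_pos = true_pos + rng.normal(0.0, 0.3, T)   # noisy position measurements
obs_col = true_col + rng.normal(0.0, 0.3, T)   # noisy appearance measurements

pos_p = rng.normal(5.0, 1.0, N)   # particles of filter A (position)
col_p = rng.normal(2.5, 1.0, N)   # particles of filter B (appearance)

for t in range(T):
    # filter A: standard correction against its own measurement
    pos_p = pf_step(pos_p, 0.15,
                    lambda x: np.exp(-0.5 * ((obs_pos[t] - x) / 0.3) ** 2))
    pos_est = pos_p.mean()
    # filter B: its likelihood is conditioned on A's current estimate, i.e. the
    # mutual dependence is applied in the 'hypotheses correction' stage
    col_p = pf_step(col_p, 0.15,
                    lambda c: np.exp(-0.5 * ((obs_col[t] - c) / 0.3) ** 2)
                              * np.exp(-0.5 * ((c - 0.5 * pos_est) / 0.5) ** 2))

print(pos_p.mean(), col_p.mean())  # posterior means of the two coupled filters
```

In this sketch, moving the coupling term into the prediction step instead (perturbing filter B's particles toward `0.5 * pos_est` during hypotheses generation) would correspond to the earlier approaches the abstract contrasts against.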