Machine Learning
"GrabCut": interactive foreground extraction using iterated graph cuts
ACM SIGGRAPH 2004 Papers
Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods
International Journal of Computer Vision
Beyond pixels: exploring new representations and applications for motion analysis
IEEE Transactions on Pattern Analysis and Machine Intelligence
Hough-based tracking of non-rigid objects
ICCV '11 Proceedings of the 2011 International Conference on Computer Vision
Structured class-labels in random forests for semantic image labelling
ICCV '11 Proceedings of the 2011 International Conference on Computer Vision
Real-time compressive tracking
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part III
In conventional online-learning-based tracking studies, fixed-shape appearance modeling is often used to generate training samples, as it is simple and convenient to apply. However, for more general non-rigid and articulated objects, this strategy may treat some background areas as foreground, which is likely to deteriorate the learning process. Recently published works use multiple patches to represent a non-rigid object via foreground segmentation, but most of these segmentations for target representation are performed on a single frame only. Because these approaches ignore the motion information between consecutive frames, accurate segmentation is hard to achieve when the background is similar to the target. In this work, we propose a novel model for non-rigid object segmentation that incorporates the gradient flow between consecutive pairs of frames into a Gibbs energy function. With the help of this motion information, irregular target areas can be segmented more accurately as the boundary converges precisely. The proposed segmentation model is incorporated into a semi-supervised online tracking framework for training-sample generation. We test the proposed tracker on challenging videos involving heavy intrinsic variations and occlusions. The experiments demonstrate a significant improvement in tracking accuracy and robustness in comparison with other state-of-the-art tracking methods.
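The abstract does not give the Gibbs energy explicitly. A minimal sketch of the kind of energy it describes is a standard binary MRF (data term plus 4-neighbour smoothness) extended with a hypothetical temporal term that penalizes disagreement with the previous frame's labels warped by the inter-frame motion; all parameter names (`lam`, `mu`) and the ICM minimizer below are illustrative, not the paper's method:

```python
import numpy as np

def gibbs_energy(labels, unary, prev_warped, lam=1.0, mu=0.5):
    """E(L) = sum_p U(p, L_p)                  data term
            + lam * sum_{p~q} [L_p != L_q]     4-neighbour smoothness
            + mu  * sum_p [L_p != L'_p]        temporal term: L' is the
                                               previous frame's labels
                                               warped by inter-frame flow
    labels: (H, W) in {0, 1}; unary: (H, W, 2) per-label costs."""
    data = np.take_along_axis(unary, labels[..., None], axis=2).sum()
    smooth = (labels[1:, :] != labels[:-1, :]).sum() + \
             (labels[:, 1:] != labels[:, :-1]).sum()
    motion = (labels != prev_warped).sum()
    return data + lam * smooth + mu * motion

def icm_step(labels, unary, prev_warped, lam=1.0, mu=0.5):
    """One sweep of iterated conditional modes: greedily relabel each
    pixel to its locally cheapest label. A simple stand-in for the
    graph-cut solvers typically used for such energies."""
    H, W = labels.shape
    for i in range(H):
        for j in range(W):
            costs = []
            for l in (0, 1):
                c = unary[i, j, l] + mu * (l != prev_warped[i, j])
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        c += lam * (l != labels[ni, nj])
                costs.append(c)
            labels[i, j] = int(np.argmin(costs))
    return labels
```

With `mu = 0`, this reduces to a single-frame segmentation energy; the temporal term is what lets motion information stabilize the boundary when background and target appearance are similar.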