Combinatorial optimization: algorithms and complexity
CONDENSATION—Conditional Density Propagation for Visual Tracking
International Journal of Computer Vision
Shape Matching and Object Recognition Using Shape Contexts
IEEE Transactions on Pattern Analysis and Machine Intelligence
Pictorial Structures for Object Recognition
International Journal of Computer Vision
Articulated Body Motion Capture by Stochastic Search
International Journal of Computer Vision
Recovering 3D Human Pose from Monocular Images
IEEE Transactions on Pattern Analysis and Machine Intelligence
Tracking People by Learning Their Appearance
IEEE Transactions on Pattern Analysis and Machine Intelligence
A Quantitative Evaluation of Video-based 3D Person Tracking
VS-PETS '05 Proceedings of the 2nd Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance
Shape context and chamfer matching in cluttered scenes
CVPR'03 Proceedings of the 2003 IEEE computer society conference on Computer vision and pattern recognition
A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking
IEEE Transactions on Signal Processing
Tracking the 3D human body in videos of unconstrained scenes is a challenging problem with widespread applications. In this paper, we introduce a novel framework that incorporates graph-based human limb detection into articulated Bayesian tracking. A 3D human body model with a hierarchical tree structure describes human movement through a set of joint parameters. A particle filter, which approximates the optimal Bayesian estimate, predicts the state of the 3D human pose. To compute the likelihood of each particle, a pictorial structure model detects the human body limbs in monocular uncalibrated images, and the detected limbs are then matched against each particle using shape contexts. The 3D pose is recovered as a weighted sum over all particles, with weights derived from their matching costs. Experimental results show that our algorithm accurately tracks walking poses over very long video sequences.
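The predict–weight–resample loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: poses are reduced to flat joint-parameter vectors, the motion model is plain Gaussian diffusion, and `matching_cost` is a hypothetical stand-in (a squared distance) for the shape-context cost between a projected particle and the limbs detected by the pictorial structure model.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, noise_scale=0.05):
    """Propagate pose particles with a Gaussian diffusion motion model."""
    return particles + rng.normal(0.0, noise_scale, particles.shape)

def matching_cost(particle, detected_limbs):
    """Hypothetical stand-in for the shape-context matching cost between
    the projected 3D pose and the detected limb configuration."""
    return np.sum((particle - detected_limbs) ** 2)

def update(particles, detected_limbs, beta=5.0):
    """Turn matching costs into likelihood weights exp(-beta * cost),
    normalize them, and form the weighted-sum pose estimate."""
    costs = np.array([matching_cost(p, detected_limbs) for p in particles])
    weights = np.exp(-beta * costs)
    weights /= weights.sum()
    estimate = (weights[:, None] * particles).sum(axis=0)
    return weights, estimate

def resample(particles, weights):
    """Multinomial resampling to concentrate particles on likely poses."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Toy run: 200 particles over a 10-dimensional joint-parameter vector,
# with a fixed synthetic "detection" standing in for per-frame limb detection.
particles = rng.normal(0.0, 1.0, (200, 10))
detected = np.zeros(10)
for _ in range(20):
    particles = predict(particles)
    weights, estimate = update(particles, detected)
    particles = resample(particles, weights)
```

In the full system, `detected` would come from running the pictorial structure detector on each frame, and `matching_cost` from shape-context correspondence, but the weighting and resampling structure is unchanged.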