3D human modeling from a single depth image dealing with self-occlusion
Multimedia Tools and Applications
Self-occlusion is a common problem in silhouette-based motion capture and often results in ambiguous pose configurations. Most works compensate for this with a priori knowledge about the motion or the scene, or by using multiple cameras. Here we suggest overcoming this problem by splitting the surface model of the object and tracking the silhouette of each part rather than of the whole object. The splitting can be done automatically by comparing the appearances of the different parts with the Jensen-Shannon divergence. Tracking is then achieved by simultaneously maximizing the appearance differences between all involved parts and the background via gradient descent. We demonstrate the improvements with tracking results on simulated and real-world scenes.
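The splitting criterion mentioned in the abstract compares part appearances with the Jensen-Shannon divergence. A minimal sketch of that measure, applied to normalized appearance histograms, is shown below; the function name, the histogram representation, and the decision threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete
    appearance histograms p and q. Symmetric and bounded in [0, 1]."""
    p = np.asarray(p, dtype=float) + eps  # eps avoids log(0)
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)  # mixture distribution

    def kl(a, b):
        # Kullback-Leibler divergence KL(a || b) in bits
        return float(np.sum(a * np.log2(a / b)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical usage: split two model parts only if their color
# histograms are sufficiently dissimilar (threshold is illustrative).
part_a = [0.7, 0.2, 0.1]   # e.g. coarse color histogram of part A
part_b = [0.1, 0.2, 0.7]   # e.g. coarse color histogram of part B
if js_divergence(part_a, part_b) > 0.1:
    pass  # treat the parts as separate silhouettes during tracking
```

Because the divergence is symmetric and bounded, it gives a scale-free score for deciding automatically which parts are distinguishable enough to track separately.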