Silhouette representation and matching for 3D pose discrimination - A comparative study
Image and Vision Computing
Automatically recovering human poses from visual input is useful but challenging due to variations in image space and the high dimensionality of the pose space. In this paper, we assume that a human silhouette can be extracted from monocular visual input. We compare three shape descriptors for encoding silhouettes: Fourier descriptors, shape contexts and Hu moments. An example-based approach is taken to recover upper body poses from these descriptors. We perform experiments with deformed silhouettes to test each descriptor's robustness against variations in body dimensions, viewpoint and noise. It is shown that Fourier descriptors and shape context histograms outperform Hu moments for all deformations.
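To illustrate the first of the three descriptors, the sketch below computes Fourier descriptors of a closed silhouette contour in Python with NumPy. This is a generic textbook formulation, not the paper's exact pipeline: the number of coefficients, the sampling of the contour, and the normalization steps here are assumptions, and the function name is hypothetical. The boundary is treated as a complex signal; dropping the DC term gives translation invariance, taking magnitudes discards phase (rotation and start-point invariance), and dividing by the first harmonic gives scale invariance.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=8):
    """Translation-, scale- and rotation-invariant Fourier descriptors of a
    closed 2D contour given as an (N, 2) array of boundary points.
    Generic formulation; coefficient count and normalization are assumptions,
    not the paper's exact configuration."""
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    F = np.fft.fft(z)
    F[0] = 0.0                 # remove DC component -> translation invariance
    mags = np.abs(F)           # discard phase -> rotation/start-point invariance
    mags = mags / mags[1]      # normalize by first harmonic -> scale invariance
    return mags[1:1 + n_coeffs]

# Toy example: a unit circle sampled at 64 points concentrates all
# contour energy in the first harmonic.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
fd = fourier_descriptors(circle)
```

For the circle, the first normalized coefficient is 1 and the remaining coefficients are (numerically) zero; for a deformed silhouette, energy spreads into higher harmonics, which is what makes the descriptor discriminative between shapes.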