Discriminative approaches to human pose estimation model the functional mapping, or conditional distribution, between image features and 3D poses. Learning such multi-modal models in high-dimensional spaces is challenging, however, when training data are limited, and often results in over-fitting and poor generalization. Latent Variable Models (LVMs) have been introduced to address these issues: shared LVMs learn a low-dimensional representation of the common causes that give rise to both the image features and the 3D pose. Discovering this shared manifold structure can itself be challenging, however, and shared LVMs are often non-parametric, so the size of the model representation grows with the training set. We present a parametric framework that addresses these shortcomings. In particular, we jointly learn latent spaces for the image features and the 3D poses by maximizing the non-linear dependence between the projected latent spaces while preserving local structure in the original spaces; we then learn a multi-modal conditional density between these two low-dimensional spaces in the form of Gaussian Mixture Regression. Because the data are denser in the learned latent spaces, this model mitigates over-fitting and improves generalization, and it avoids the need to learn a single shared manifold for the data. We quantitatively compare the proposed method against several state-of-the-art alternatives and show that it performs competitively.
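The Gaussian Mixture Regression step above can be sketched as follows: fit a joint Gaussian mixture over concatenated input/output codes, then read off the conditional density in closed form, which stays multi-modal when the mapping is one-to-many. This is a minimal illustration on toy 1D data, not the paper's implementation; all variable names and the two-branch toy problem are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy multi-modal mapping: two branches y = +/- sqrt(x) with small noise,
# standing in for the ambiguity between features and 3D poses.
x = rng.uniform(0.1, 1.0, size=(500, 1))
sign = rng.choice([-1.0, 1.0], size=(500, 1))
y = sign * np.sqrt(x) + 0.05 * rng.standard_normal((500, 1))

# Fit a full-covariance GMM over the joint [x, y] space.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))

def conditional(gmm, x_query, dx):
    """Mixture weights and component means of p(y | x) at one query point."""
    mu, cov, pi = gmm.means_, gmm.covariances_, gmm.weights_
    w, cond_means = [], []
    for k in range(gmm.n_components):
        mu_x, mu_y = mu[k, :dx], mu[k, dx:]
        Sxx, Syx = cov[k][:dx, :dx], cov[k][dx:, :dx]
        Sxx_inv = np.linalg.inv(Sxx)
        diff = x_query - mu_x
        # Responsibility of component k given the observed x.
        lik = np.exp(-0.5 * diff @ Sxx_inv @ diff) / np.sqrt(
            (2 * np.pi) ** dx * np.linalg.det(Sxx))
        w.append(pi[k] * lik)
        # Conditional mean of y under component k.
        cond_means.append(mu_y + Syx @ Sxx_inv @ diff)
    w = np.asarray(w)
    return w / w.sum(), np.asarray(cond_means)

weights, means = conditional(gmm, np.array([0.64]), dx=1)
# p(y | x=0.64) keeps two modes, one per branch, near +0.8 and -0.8.
```

Taking a single conditional mean here would average the two branches and land between them; keeping the full conditional mixture is what preserves the one-to-many structure that the abstract's multi-modal density targets.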