Motion capture data from human subjects exhibits considerable redundancy. In this paper, we propose novel methods for exploiting this redundancy. In particular, we set out to find a subset of motion-capture markers that can provide fast, high-quality predictions of the remaining markers, and we develop a model that uses this reduced marker set to predict the others. We demonstrate that this subset of the original markers is sufficient to capture subtle variations in human motion.

We take a data-driven modeling approach, learning piecewise local linear models from a marker-based training set. We first divide motion sequences into segments of low dimensionality. We then extract a feature vector from each motion segment and use these feature vectors as modeling primitives to cluster the segments into a hierarchy of local linear models via divisive clustering. At run time, a classifier driven by the reduced marker set automatically selects the appropriate local linear model for reconstructing a full-body pose. After offline training, our method can quickly reconstruct full-body human motion from the reduced marker set without storing or searching the large database. We also demonstrate that our method generalizes across a variety of motions from multiple subjects.
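The core of the approach described above can be illustrated with a minimal sketch of a single local linear model: a least-squares mapping from reduced-marker coordinates to full-body marker coordinates. This is a simplified illustration only — the paper uses a hierarchy of such models selected by a classifier, and the data sizes and variable names below are hypothetical.

```python
import numpy as np

# Hypothetical sketch: one local linear model predicting a full marker set
# from a reduced marker subset. Sizes and data are illustrative placeholders.
rng = np.random.default_rng(0)

n_frames, n_full, n_reduced = 500, 40, 6          # frames, full markers, reduced markers
full = rng.normal(size=(n_frames, n_full * 3))    # full-body 3D marker coordinates
reduced_idx = rng.choice(n_full * 3, n_reduced * 3, replace=False)
reduced = full[:, reduced_idx]                    # the reduced marker subset

# Fit W so that [reduced | 1] @ W approximates the full marker set
# (ordinary least squares with a bias column).
X = np.hstack([reduced, np.ones((n_frames, 1))])
W, *_ = np.linalg.lstsq(X, full, rcond=None)

# Reconstruct a frame's full-body pose from its reduced markers.
x_new = np.hstack([reduced[0], 1.0])
pred = x_new @ W                                  # predicted full marker coordinates
```

In the full method, many such models are trained offline on clustered motion segments, and the classifier picks which model's `W` to apply at each frame based on the reduced-marker input.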