Nonlinear Generative Models for Dynamic Shape and Dynamic Appearance

  • Authors:
  • Ahmed Elgammal

  • Affiliations:
  • Rutgers University, Piscataway, NJ

  • Venue:
  • CVPRW '04: Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04), Volume 12
  • Year:
  • 2004

Abstract

Our objective is to learn representations for the shape and the appearance of moving (dynamic) objects that support tasks such as synthesis, pose recovery, reconstruction, and tracking. In this paper we introduce a framework that aims to learn landmark-free, correspondence-free global representations of dynamic appearance manifolds. We use nonlinear dimensionality reduction to achieve an embedding of the global deformation manifold that preserves its geometric structure. Given such an embedding, a nonlinear mapping is learned from the embedding space into the visual input space, so that any visual input is represented by a linear combination of nonlinear basis functions centered along the manifold in the embedding space. We also show how an approximate solution for the inverse mapping can be obtained in closed form, which facilitates recovery of the intrinsic body configuration. We use the framework to learn the gait manifold as an example of a dynamic shape manifold, as well as manifolds for some simple gestures and facial expressions as examples of dynamic appearance manifolds.
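The pipeline the abstract describes (embed the manifold, learn a radial-basis-function mapping from the embedding space to the input space, and invert it to recover the intrinsic configuration) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the circular embedding stands in for the output of a nonlinear dimensionality reduction method, and the random linear lift stands in for real image frames.

```python
import numpy as np

# Toy "dynamic" data: frames sampled along a one-dimensional cycle
# (a stand-in for gait frames), lifted into a 50-D "image" space.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)   # intrinsic phase
lift = rng.standard_normal((2, 50))                 # assumed random lift
Y = np.c_[np.cos(t), np.sin(t)] @ lift              # visual inputs y_i

# Step 1: embedding of the deformation manifold. The paper uses
# nonlinear dimensionality reduction; here the known circular
# coordinates serve as a placeholder embedding.
X = np.c_[np.cos(t), np.sin(t)]                     # embedding coords x_i

# Step 2: learn a mapping f(x) = sum_k w_k * phi(||x - c_k||) from the
# embedding space into the input space, i.e. each input is a linear
# combination of nonlinear basis functions centered along the manifold.
centers = X[::4]                                    # basis centers

def phi(r, s=0.5):
    return np.exp(-(r / s) ** 2)                    # Gaussian basis

def design(Xq):
    # Pairwise distances from query points to the basis centers.
    d = np.linalg.norm(Xq[:, None, :] - centers[None, :, :], axis=-1)
    return phi(d)

W, *_ = np.linalg.lstsq(design(X), Y, rcond=None)   # closed-form weights

# Step 3: approximate inverse mapping. Given a new input y, recover the
# embedding coordinate by minimizing ||f(x) - y|| over candidates
# sampled along the manifold (a simple grid search stand-in for the
# paper's closed-form approximation).
def invert(y, n_grid=400):
    tt = np.linspace(0, 2 * np.pi, n_grid)
    Xg = np.c_[np.cos(tt), np.sin(tt)]
    err = np.linalg.norm(design(Xg) @ W - y, axis=1)
    return Xg[np.argmin(err)]

y_new = np.array([np.cos(0.7), np.sin(0.7)]) @ lift  # frame at phase 0.7
x_rec = invert(y_new)                                # recovered coordinate
```

Recovering `x_rec` close to `(cos 0.7, sin 0.7)` corresponds to recovering the intrinsic body configuration (here, the phase of the cycle) from a visual input alone.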