Nonlinear manifold learning for dynamic shape and dynamic appearance

  • Authors:
  • Ahmed Elgammal; Chan-Su Lee

  • Affiliations:
  • Department of Computer Science, Rutgers University, Piscataway, NJ, USA (both authors)

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2007

Abstract

Our objective is to learn representations for the shape and the appearance of moving (dynamic) objects that support tasks such as synthesis, pose recovery, reconstruction, and tracking. In this paper, we introduce a framework that aims to learn landmark-free, correspondence-free global representations of dynamic appearance manifolds. We use nonlinear dimensionality reduction to obtain an embedding of the global deformation manifold that preserves the geometric structure of the manifold. Given such an embedding, a nonlinear mapping is learned from the embedding space into the visual input space. As a result, any visual input is represented as a linear combination of nonlinear basis functions centered along the manifold in the embedding space. We also show how an approximate solution for the inverse mapping can be obtained in closed form, which facilitates recovery of the intrinsic body configuration. We use the framework to learn the gait manifold as an example of a dynamic shape manifold, as well as to learn the manifolds for some simple gestures and facial expressions as examples of dynamic appearance manifolds.
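The pipeline described in the abstract (nonlinear embedding of the deformation manifold, an RBF-style mapping from the embedding back to the input space, and an inverse step to recover the configuration) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: the synthetic data, the choice of LLE as the embedding, the RBF width heuristic, and the nearest-reconstruction inverse are all illustrative assumptions standing in for the paper's specific choices.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Toy "dynamic appearance" sequence: each row is a flattened frame lying on a
# one-dimensional cyclic manifold (in the paper these would be gait
# silhouettes, gestures, or facial-expression images).
rng = np.random.default_rng(0)
T, D = 200, 64                                   # frames, input dimensionality
phase = np.linspace(0.0, 2.0 * np.pi, T, endpoint=False)
basis = rng.standard_normal((3, D))
Y = (np.c_[np.sin(phase), np.cos(phase), np.sin(2.0 * phase)] @ basis
     + 0.01 * rng.standard_normal((T, D)))

# 1) Nonlinear dimensionality reduction: embed the deformation manifold while
#    preserving its local geometric structure (LLE used here as one choice).
embed = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
X = embed.fit_transform(Y)                       # T x 2 embedding coordinates

# 2) Learn a nonlinear mapping from the embedding space to the input space as
#    a linear combination of RBF basis functions centered along the manifold:
#    y(x) ~= psi(x) @ B.
centers = X[::10]                                # basis centers along the manifold
sigma = np.median(np.linalg.norm(X[:, None] - centers[None], axis=2))

def psi(x):
    """RBF feature vector(s) for embedding point(s) x."""
    d2 = np.sum((np.atleast_2d(x)[:, None, :] - centers[None]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

Psi = psi(X)                                     # T x K design matrix
B, *_ = np.linalg.lstsq(Psi, Y, rcond=None)      # K x D mapping coefficients

# 3) Inverse mapping: given a new input y, recover the embedding coordinate
#    (intrinsic configuration) by scoring candidate points on the learned
#    manifold against their reconstructions. This nearest-reconstruction
#    search is a simple stand-in for the paper's closed-form approximation.
def recover_configuration(y, candidates=X):
    recon = psi(candidates) @ B                  # reconstructions of candidates
    errs = np.linalg.norm(recon - y, axis=1)
    return candidates[np.argmin(errs)]

y_new = Y[37] + 0.01 * rng.standard_normal(D)
print("recovered embedding coordinate:", recover_configuration(y_new))
```

Synthesis corresponds to evaluating `psi(x) @ B` at new embedding coordinates, while pose recovery and tracking correspond to repeated use of the inverse step on incoming frames.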