Motion learning-based framework for unarticulated shape animation

  • Authors:
  • Chao Jin, Thomas Fevens, Shuo Li, Sudhir Mudur

  • Affiliations:
  • Concordia University, Department of Computer Science and Software Engineering, 1455 De Maisonneuve Blvd. West, Montreal, QC H3G 1M8, Canada (Chao Jin, Thomas Fevens, Sudhir Mudur); GE Healthcare, 700 Collip Circle, London, ON N6G 4X8, Canada (Shuo Li)

  • Venue:
  • The Visual Computer: International Journal of Computer Graphics
  • Year:
  • 2007


Abstract

This paper presents a framework for generating animation sequences while maintaining desirable physical properties of a deformable shape. The framework consists of three main processes. First, given key pose configurations in the form of unarticulated meshes in a high-dimensional space, we cast the motion into a low-dimensional space using the unsupervised learning method of locally linear embedding (LLE). For each point in LLE space, we can then reconstruct an in-between pose using generalized radial basis functions. Next, we map the values of physical properties of the mesh, such as area and volume, over the LLE space. Finally, a probability distribution function in LLE space lets us rapidly choose the required number of in-between poses with the desired physical properties. A significant advantage of this framework is that it relieves the animator of the tedium of having to carefully provide key poses to suit the interpolant.
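As a rough illustration of the pipeline described above, the following Python sketch strings the steps together using off-the-shelf components (scikit-learn's LocallyLinearEmbedding and SciPy's RBFInterpolator). The function names, the Gaussian weighting used as a stand-in for the probability distribution, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.manifold import LocallyLinearEmbedding


def build_pose_model(key_poses, n_neighbors=5, n_components=2):
    """key_poses: (k, 3*V) array, one flattened fixed-topology mesh per key pose."""
    # Step 1: embed the high-dimensional key poses in a low-dimensional LLE space.
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components)
    coords = lle.fit_transform(key_poses)            # (k, n_components)
    # Step 2: a generalized RBF interpolant mapping LLE coordinates back to
    # mesh space, so any point in LLE space reconstructs an in-between pose.
    pose_rbf = RBFInterpolator(coords, key_poses, kernel='thin_plate_spline')
    return coords, pose_rbf


def sample_in_betweens(coords, pose_rbf, prop_fn, target, n_samples=10,
                       n_candidates=2000, sigma=0.05, seed=None):
    """Steps 3-4: evaluate a physical property (e.g. area or volume) over LLE
    space and draw poses whose property value is close to `target`, using a
    Gaussian weighting as a stand-in for the probability distribution function."""
    rng = np.random.default_rng(seed)
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    candidates = rng.uniform(lo, hi, size=(n_candidates, coords.shape[1]))
    poses = pose_rbf(candidates)                     # reconstructed candidate poses
    props = np.array([prop_fn(p) for p in poses])    # property map over LLE space
    weights = np.exp(-0.5 * ((props - target) / sigma) ** 2) + 1e-12
    weights /= weights.sum()
    picked = rng.choice(n_candidates, size=n_samples, replace=False, p=weights)
    return poses[picked]
```

In this sketch, `prop_fn` would be a user-supplied routine computing the physical property of a reconstructed mesh (for instance, its enclosed volume), so the sampling step returns only in-between poses whose property values lie near the target.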