Anatomy-based face reconstruction for animation using multi-layer deformation

  • Authors:
  • Yu Zhang, Terence Sim, Chew Lim Tan, Eric Sung

  • Affiliations:
  • Department of Computer Science, School of Computing, National University of Singapore, Singapore 117543, Singapore (Zhang, Sim, Tan); School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore (Sung)

  • Venue:
  • Journal of Visual Languages and Computing

  • Year:
  • 2006

Abstract

This paper presents a novel multi-layer deformation (MLD) method for reconstructing animatable, anatomy-based human facial models with minimal manual intervention. Our method adapts a prototype model with a multi-layer anatomical structure to acquired range data in an "outside-in" manner: deformation applied to the external skin layer is propagated through subsequent transformations to the muscles, with the final effect of warping the underlying skull. The prototype model has a known topology and incorporates a multi-layer structural hierarchy of physically based skin, muscles, and skull. In MLD, a global alignment first adapts the position, size, and orientation of the prototype model to the scanned data, based on measurements between a subset of specified anthropometric landmarks. In the skin-layer adaptation, the generic skin mesh is represented as a dynamic deformable model subject to internal forces stemming from the elastic properties of the surface and external forces generated by the input data points and features. A fully automated approach adapts the underlying muscle layer, which consists of three types of physically based facial muscle models. MLD then deforms a set of automatically generated skull feature points according to the adapted skin and muscle layers, and the new positions of these feature points drive a volume morphing applied to the template skull model. We demonstrate the method by generating a wide range of facial models on which various facial expressions are animated.
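The global alignment step can be pictured as a landmark-driven similarity fit: given matched anthropometric landmarks on the prototype and on the scan, recover the scale, rotation, and translation that best superimpose them. Below is a minimal Python/NumPy sketch using the closed-form Umeyama solution as a standard stand-in; the function name and the least-squares formulation are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def similarity_align(prototype_pts, scan_pts):
    """Closed-form similarity transform (scale s, rotation R, translation t)
    mapping prototype landmarks onto matching scan landmarks in the
    least-squares sense (Umeyama, 1991)."""
    P = np.asarray(prototype_pts, dtype=float)  # (n, 3) prototype landmarks
    Q = np.asarray(scan_pts, dtype=float)       # (n, 3) matching scan landmarks
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q
    # Cross-covariance and its SVD yield the optimal rotation.
    U, S, Vt = np.linalg.svd(Qc.T @ Pc / len(P))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / Pc.var(axis=0).sum()
    t = mu_q - s * R @ mu_p
    return s, R, t

# Usage: apply the fit to every vertex of the prototype model, e.g.
#   s, R, t = similarity_align(proto_landmarks, scan_landmarks)
#   aligned_vertices = s * (R @ vertices.T).T + t
```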
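The skin-layer adaptation treats the generic mesh as a dynamic system whose vertices move under internal elastic forces and external attraction toward the scanned data. The sketch below illustrates one such force-based relaxation; the edge-spring internal force, closest-point external force, and damped explicit Euler step are assumed simplifications, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def relax_skin(verts, edges, scan_pts, k_int=0.5, k_ext=0.2,
               damping=0.9, dt=0.1, iters=200):
    """Evolve mesh vertices under spring (internal) and closest-point
    (external) forces until the skin layer settles onto the scan."""
    V = verts.copy()
    vel = np.zeros_like(V)
    rest = np.linalg.norm(V[edges[:, 0]] - V[edges[:, 1]], axis=1)  # rest lengths
    tree = cKDTree(scan_pts)
    for _ in range(iters):
        F = np.zeros_like(V)
        # Internal elastic force: springs along mesh edges resist stretching.
        d = V[edges[:, 0]] - V[edges[:, 1]]
        length = np.linalg.norm(d, axis=1, keepdims=True)
        f = k_int * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
        np.add.at(F, edges[:, 0], -f)
        np.add.at(F, edges[:, 1], f)
        # External force: pull each vertex toward its closest scan point.
        _, idx = tree.query(V)
        F += k_ext * (scan_pts[idx] - V)
        vel = damping * (vel + dt * F)  # damped explicit Euler step
        V += dt * vel
    return V
```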
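Finally, the skull step displaces a set of feature points and interpolates those displacements over the whole template skull. A common way to realize such a feature-driven volume morph is scattered-data interpolation with radial basis functions; the cubic kernel and regularization below are assumed stand-ins for the paper's morphing function.

```python
import numpy as np

def rbf_volume_morph(skull_verts, feat_src, feat_dst, kernel=lambda r: r**3):
    """Warp template skull vertices so each source feature point feat_src[i]
    lands on its adapted target feat_dst[i], interpolating the displacement
    field elsewhere with a radial basis function."""
    n = len(feat_src)
    # Solve for RBF weights that reproduce the feature-point displacements.
    r = np.linalg.norm(feat_src[:, None, :] - feat_src[None, :, :], axis=-1)
    W = np.linalg.solve(kernel(r) + 1e-8 * np.eye(n), feat_dst - feat_src)
    # Evaluate the interpolated displacement at every skull vertex.
    rv = np.linalg.norm(skull_verts[:, None, :] - feat_src[None, :, :], axis=-1)
    return skull_verts + kernel(rv) @ W
```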