Visyllable-specific facial transition motion embedding and extraction

  • Authors:
  • Yuru Pei; Hongbin Zha

  • Affiliations:
  • Key Laboratory of Machine Perception, MOE, Peking University, Beijing, China (both authors)

  • Venue:
  • ICIP '09: Proceedings of the 16th IEEE International Conference on Image Processing
  • Year:
  • 2009


Abstract

Visual facial appearance is important to speech perception, and the effective extraction of transition motions between keyframes is desirable for facial speech animation. In this paper, we present a visyllable-specific transition motion embedding based on a temporal extension of Laplacian eigenmaps (TLE). By imposing temporal constraints, the TLE-based embedding preserves the possible transitions between keyshapes within a visyllable sequence. Given a keyframe pair, the in-between transition motions can be extracted in the latent space with a shortest-path search. Our experiments demonstrate an effective engine for embedding and extracting transition motions specific to visyllables.
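
The sketch below illustrates the general idea described in the abstract, not the authors' implementation: a Laplacian-eigenmaps embedding whose neighbourhood graph is augmented with edges between temporally adjacent frames, followed by a shortest-path search in the latent space to recover in-between frames for a keyframe pair. The frame features, neighbourhood size, kernel width, and temporal edge weight are placeholder assumptions.

```python
# Minimal sketch: temporally constrained Laplacian eigenmaps + shortest-path
# extraction of in-between frames. All parameter choices are illustrative.
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import dijkstra
from scipy.spatial.distance import cdist


def temporal_laplacian_eigenmaps(X, k=8, sigma=1.0, dim=3, temporal_weight=1.0):
    """Embed frames X (n_frames x n_features) with Laplacian eigenmaps,
    adding edges between consecutive frames as the temporal constraint."""
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros((n, n))

    # k-nearest-neighbour edges weighted by a heat kernel.
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]
        W[i, nbrs] = np.exp(-D2[i, nbrs] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)  # symmetrise

    # Temporal constraint: consecutive frames of the sequence stay connected.
    idx = np.arange(n - 1)
    W[idx, idx + 1] = np.maximum(W[idx, idx + 1], temporal_weight)
    W[idx + 1, idx] = W[idx, idx + 1]

    # Generalised eigenproblem L y = lambda D y; drop the constant eigenvector.
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W
    _, vecs = eigh(L, Dg)
    return vecs[:, 1:dim + 1], W


def transition_path(Y, W, start, goal):
    """Shortest path between two keyframes, with edge costs given by
    Euclidean distances in the latent space Y over the graph edges in W."""
    n = Y.shape[0]
    cost = np.full((n, n), np.inf)
    ii, jj = np.nonzero(W)
    cost[ii, jj] = np.linalg.norm(Y[ii] - Y[jj], axis=1)
    _, pred = dijkstra(cost, indices=start, return_predecessors=True)
    path, node = [], goal
    while node != start and node != -9999:  # -9999 is scipy's "no predecessor"
        path.append(node)
        node = pred[node]
    return [start] + path[::-1]


if __name__ == "__main__":
    # Toy motion sequence standing in for visyllable frame features.
    frames = np.cumsum(np.random.randn(60, 30) * 0.1, axis=0)
    Y, W = temporal_laplacian_eigenmaps(frames)
    print(transition_path(Y, W, start=0, goal=59))
```

The temporal edges keep the embedding faithful to transitions that actually occur within the visyllable sequence, so the shortest path between two keyframes passes through plausible intermediate shapes rather than cutting across unvisited regions of the latent space.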