Visyllable-specific facial transition motion embedding and extraction
ICIP'09 Proceedings of the 16th IEEE international conference on Image processing
Stylized synthesis of facial speech motion is central to facial animation. Most synthesis algorithms emphasize the plausible concatenation of captured motion segments, while the dynamic modeling of speech units, e.g. visemes and visyllables (the visual appearance of a syllable), has drawn little attention. In this paper, we address the fundamental issues in the stylized dynamic modeling of visyllables. A decomposable generalized model is learned for stylized motion synthesis. The visyllable modeling comprises two parts: (1) a dynamic model for each kind of visyllable, learned with a Gaussian Process Dynamical Model; and (2) a unified, multilinear-model-based mapping between the high-dimensional observation space and the low-dimensional latent space. The dynamic visyllable model embeds the high-dimensional motion data and simultaneously constructs the dynamic mapping in the latent space. To generalize the visyllable model across instances, the mapping coefficient matrices are assembled into a tensor, which is decomposed into independent modes, e.g. identity and uttering style. Novel stylized motions can then be synthesized by linearly combining the components of each mode. Copyright © 2007 John Wiley & Sons, Ltd.
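The multilinear step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-instance mapping coefficient matrices are flattened into vectors and stacked into a third-order tensor (identity × uttering style × coefficients), factorizes it with a higher-order SVD, and synthesizes a new mapping by linearly blending the rows of the identity and style mode matrices. All dimensions, function names, and the choice of HOSVD as the decomposition are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3rd-order tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd3(T):
    """Higher-order SVD of a 3rd-order tensor.

    Returns the core tensor and one orthonormal factor matrix per
    mode (here assumed to be identity, style, and coefficient modes).
    """
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
         for m in range(3)]
    # Core = T x1 U0^T x2 U1^T x3 U2^T
    core = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return core, U

def synthesize(core, U, w_identity, w_style):
    """Blend identity and style components into a new mapping vector.

    w_identity / w_style are linear-combination weights over the
    training identities and uttering styles (hypothetical names).
    """
    u_id = w_identity @ U[0]   # blended identity-mode coefficients
    u_st = w_style @ U[1]      # blended style-mode coefficients
    # Contract the core with the blended vectors, keeping the
    # coefficient mode to recover a flattened mapping matrix.
    return np.einsum('abc,a,b,kc->k', core, u_id, u_st, U[2])
```

With one-hot weights the synthesis reproduces a training instance's mapping exactly; intermediate weights interpolate between identities and uttering styles, which is the sense in which novel stylized motions are obtained from linear combinations of mode components.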