In this paper, we present a feature-based approach to cloning facial expressions from an input face model to an output model, using predefined source key-models and the corresponding target key-models. Adopting a scattered data interpolation technique, our approach consists of two parts: analysis of face key-models and synthesis of facial expressions. In the analysis part, carried out once at the beginning, the key-models are automatically segmented into five regions, each containing one facial feature (the left and right eyes, the left and right cheeks, and the mouth), which gives rise to five sets of source key-shapes and the corresponding sets of target key-shapes. Using the key-shapes of each source feature, those of the corresponding target feature are parameterized. In the synthesis part, given a sequence of face models comprising an input animation, the five output features are obtained separately by blending their own target key-shapes; these separately produced features are then combined to synthesize the output face model at each frame. Our feature-based approach convincingly clones diverse expressions, including asymmetric ones, with a small number of face key-models while achieving online, real-time performance. Copyright © 2005 John Wiley & Sons, Ltd.
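As a rough sketch of the synthesis step described above: an input frame is expressed as blend weights over the source key-shapes of a feature region, and those same weights are then applied to the corresponding target key-shapes. The sketch below uses a least-squares fit in place of the paper's scattered data interpolation, and all function and variable names are hypothetical, not taken from the paper:

```python
import numpy as np

def blend_weights(source_keys, frame):
    """Estimate weights w such that the input frame is approximately a
    weighted combination of the source key-shapes.

    source_keys : (k, n) array, each row a flattened source key-shape
    frame       : (n,) array, the flattened input feature region
    """
    # Least-squares stand-in for the paper's scattered data interpolation.
    w, *_ = np.linalg.lstsq(source_keys.T, frame, rcond=None)
    return w

def clone_frame(source_keys, target_keys, frame):
    """Apply the weights recovered from the source feature to the
    corresponding target key-shapes (one feature region per call)."""
    w = blend_weights(source_keys, frame)
    return target_keys.T @ w

# Tiny illustrative example with 3 key-shapes of 3 coordinates each.
src = np.eye(3)          # source key-shapes (rows)
tgt = 2.0 * np.eye(3)    # corresponding target key-shapes
frame = np.array([0.2, 0.5, 0.3])
out = clone_frame(src, tgt, frame)
```

In the full method this blending would be performed independently for each of the five feature regions, and the resulting regions combined into the output face model at every frame.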