Mimicking 3D transformations of emotional stylised animation with minimal 2D input
Proceedings of the 1st international conference on Computer graphics and interactive techniques in Australasia and South East Asia
Manifold analysis of facial gestures for face recognition
WBMA '03 Proceedings of the 2003 ACM SIGMM workshop on Biometrics methods and applications
Human computer intelligent interaction using augmented cognition and emotional intelligence
ICVR'07 Proceedings of the 2nd international conference on Virtual reality
A facial analysis-synthesis framework based on a concise set of local, independently actuated Co-articulation Regions (CRs) is presented for the control of 2D animated characters. CRs are parameterized by muscle actuations and thereby provide a physically meaningful description of face state that is easily abstracted to higher-level descriptions of facial expression. Independent component analysis of a set of training images acquired from an actor is used to characterize the appearance space of each CR. Within this framework, actor-independent face reconstruction databases can be created by an artist or extracted from video sequences. In addition, the muscle parameter values may be used to drive any similarly parameterized 3D facial model. The flexibility afforded by this methodology is demonstrated with applications to 2D facial animation control and sample-based video synthesis. The analysis runs in real time on modest consumer hardware.
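The abstract's analysis step — independent component analysis over training images of each region — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch data is synthetic, and the region size, component count, and variable names are all assumptions; it only shows the general pattern of learning a per-CR appearance basis with ICA, projecting a new image into that basis, and approximately reconstructing it.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical training set: each row is a flattened grayscale patch of one
# co-articulation region (e.g. the mouth) cropped from an actor's frame.
# Random data stands in for real images in this sketch.
rng = np.random.default_rng(0)
n_frames, n_pixels, n_components = 200, 16 * 16, 8
patches = rng.random((n_frames, n_pixels))

# Learn an ICA basis characterizing this CR's appearance space.
ica = FastICA(n_components=n_components, random_state=0)
codes = ica.fit_transform(patches)  # per-frame appearance coefficients

# Analysis of a new frame: project the CR patch onto the learned basis.
new_patch = rng.random((1, n_pixels))
code = ica.transform(new_patch)

# Synthesis: invert the transform to get an approximate reconstruction.
reconstruction = ica.inverse_transform(code)
print(codes.shape, code.shape, reconstruction.shape)
```

In the framework described above, such low-dimensional codes would then be mapped to muscle-actuation parameters, giving the physically meaningful description of face state that higher-level expression labels are built on.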