This paper describes an expression-space generation technique that enables animators to control the expressions of 3-dimensional avatars in real time by selecting a series of expressions from a facial expression space. Approximately 2,400 captured facial expression frames are used to generate this space. Each expression state is represented by a distance matrix recording the distances between facial feature points, and the set of these distance matrices defines the facial expression space. Because this space does not permit a natural straight-line transition from one expression to another, the route between two expressions is instead inferred approximately from the captured data. Two expressions are considered adjacent when the distance between their distance matrices falls below a threshold; a route exists between two arbitrary expression states when they are connected by a chain of adjacent expressions, and the transition from one expression to the other is taken to follow the shortest such path, which is computed with dynamic programming. Since the facial expression space, as a set of distance matrices, is high-dimensional, multidimensional scaling is used to visualize it in 2-dimensional space, and animators control facial expressions in real time by navigating this projection. The paper concludes with an experimental evaluation of the system.
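The pipeline the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses a small synthetic set of frames instead of the ~2,400 captured ones, takes the Frobenius norm as one plausible distance between distance matrices, picks an arbitrary percentile as the adjacency threshold, and uses Floyd-Warshall as the dynamic-programming shortest-path step and classical MDS for the 2D visualization.

```python
import numpy as np

# Hypothetical data: N captured frames, each with K facial feature points in 3D.
rng = np.random.default_rng(0)
N, K = 30, 10
frames = rng.random((N, K, 3))

# 1. Represent each expression state by its inter-point distance matrix.
def distance_matrix(points):
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

states = np.array([distance_matrix(f) for f in frames])  # shape (N, K, K)

# 2. Distance between two expression states: Frobenius norm of the
#    difference of their distance matrices (one plausible choice).
D = np.array([[np.linalg.norm(states[i] - states[j]) for j in range(N)]
              for i in range(N)])

# 3. Two expressions are "adjacent" when their distance falls below a
#    threshold (here an arbitrary percentile of the observed distances).
threshold = np.percentile(D[D > 0], 20)
W = np.where((D > 0) & (D <= threshold), D, np.inf)
np.fill_diagonal(W, 0.0)

# 4. Shortest paths between all expression pairs via Floyd-Warshall,
#    a dynamic-programming all-pairs shortest-path algorithm.
S = W.copy()
for k in range(N):
    S = np.minimum(S, S[:, k:k + 1] + S[k:k + 1, :])

# A transition route exists between expressions i and j iff S[i, j] is finite.
print("connected pairs:", int(np.isfinite(S).sum()))

# 5. Classical MDS: double-center the squared distances and embed the
#    expression space in 2D so an animator can navigate it.
J = np.eye(N) - np.ones((N, N)) / N
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
top = np.argsort(vals)[::-1][:2]
coords = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))  # (N, 2) layout
```

In such a layout, selecting a series of 2D points and playing back the frames along the shortest path between the corresponding expression states would yield the kind of real-time expression control the paper describes.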