Computers in Entertainment (CIE) - SPECIAL ISSUE: Games
In this paper, we describe a modular, multi-dimensional parameter space for real-time, game-based facial animation. Faces are our most expressive communication tools. A synthetic facial creation and animation system should therefore have its own tailored authoring environment rather than relying on general-purpose tools for image, 2D, and 3D animation. Such an environment would take advantage of a knowledge space of face types, expressions, and behavior, encoding known facial knowledge and meaning into a comprehensive, intuitive facial language and set of user tools. Since faces and facial expression work on so many cognitive levels, we propose a multi-dimensional parameter space called FaceSpace as the basic face model, together with a comprehensive authoring environment built on that model. We describe the underlying mechanisms of our environment and demonstrate its early game applications and content-creation process.
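To make the core idea concrete, the sketch below shows one way a multi-dimensional facial parameter space could be represented and blended in real time. All names (`FacePose`, `blend`, the `smile` and `brow_raise` dimensions) are illustrative assumptions, not the authors' actual FaceSpace implementation.

```python
# Hypothetical sketch of a multi-dimensional facial parameter space,
# in the spirit of FaceSpace. A pose is a point in the space: a set of
# named expression dimensions, each normalized to [0.0, 1.0].
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class FacePose:
    """A point in the parameter space: named dimensions in [0.0, 1.0]."""
    params: Dict[str, float] = field(default_factory=dict)

    def clamped(self) -> "FacePose":
        # Keep every dimension inside the valid [0, 1] range.
        return FacePose({k: min(1.0, max(0.0, v))
                         for k, v in self.params.items()})


def blend(a: FacePose, b: FacePose, t: float) -> FacePose:
    """Linearly interpolate between two poses (t in [0, 1]).

    Dimensions missing from one pose are treated as 0.0, so poses
    authored with different dimension sets can still be combined.
    """
    keys = set(a.params) | set(b.params)
    return FacePose({
        k: (1 - t) * a.params.get(k, 0.0) + t * b.params.get(k, 0.0)
        for k in keys
    }).clamped()


# Example: morph from a neutral face toward a smile, as a game loop
# might do each frame by advancing t over time.
neutral = FacePose({"smile": 0.0, "brow_raise": 0.0})
smile = FacePose({"smile": 1.0, "brow_raise": 0.3})
halfway = blend(neutral, smile, 0.5)
```

Representing expressions as points in a shared parameter space is what allows authoring tools to name, store, and interpolate facial behaviors independently of any particular face geometry.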