Computer animation has come a long way during the last decade and can now produce near-realistic rendered 3D models of expressive, talking, acting humanoids and other characters inhabiting virtual worlds. However, the amount of work animators and artists must still do to produce these synthetic character performances remains significant. In this paper, we present an expert system based on fuzzy knowledge bases that moves toward automating the task of animating virtual human heads and faces. Our Virtual Actor (Vactor) framework comprises several subsystems that use mainly fuzzy, and to a lesser degree non-fuzzy, linguistic rules to teach virtual actors which emotions and gestures to use in different situations. Theories of emotion, personality, dialogue, and acting, as well as empirical evidence, are incorporated into our framework and knowledge bases to produce convincing results.
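To make the idea of fuzzy linguistic rules concrete, the sketch below shows how a single rule base of the form "IF the situation IS <term> THEN the emotion intensity IS <level>" can be evaluated. The variable names, membership functions, and rules here are illustrative assumptions for exposition, not the paper's actual knowledge bases.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for a hypothetical input "insult severity" in [0, 1].
severity_terms = {
    "mild":   lambda x: tri(x, -0.5, 0.0, 0.5),
    "medium": lambda x: tri(x,  0.0, 0.5, 1.0),
    "severe": lambda x: tri(x,  0.5, 1.0, 1.5),
}

# Rule base: IF severity IS <term> THEN anger IS <level> (singleton outputs).
rules = [("mild", 0.2), ("medium", 0.5), ("severe", 0.9)]

def anger_level(severity):
    """Fire each rule to its membership degree, then defuzzify by weighted average."""
    num = den = 0.0
    for term, out in rules:
        w = severity_terms[term](severity)  # degree to which this rule fires
        num += w * out
        den += w
    return num / den if den else 0.0
```

For example, a severity of 0.75 fires the "medium" and "severe" rules equally, blending their outputs into an anger level between the two. The resulting intensity would then drive the actor's facial expression or gesture selection.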