Dialoguing with a synthetic agent is a vast research topic aimed at enhancing user-interface friendliness. In this paper we present an ongoing project on the simulation of a dialog situation between two synthetic agents. In particular, we focus on finding the appropriate facial expressions of a speaker addressing different types of listeners (tourist, employee, child, and so on) using various linguistic forms such as request, question, and information. Communication between speaker and listener involves multimodal behaviors: the choice of words, intonation, and paralinguistic parameters on the vocal side; facial expressions, gaze, gesture, and body movements on the non-verbal side. The choice of each individual behavior, together with their mutual interaction and synchronization, produces the richness and subtlety of human communication.

In order to develop a system that automatically computes the appropriate facial and gaze behaviors corresponding to a communicative act for a given speaker and listener, our first step is to categorize facial expressions and gaze by their communicative functions rather than by their appearance. The next step is to find inference rules that describe the "mental" process going on in the speaker while communicating with the listener. These rules take into account the power relation between speaker and listener, and the beliefs the speaker holds about the listener, in order to constrain the choice of performative acts.
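The rule-based selection described above can be sketched as follows. This is a minimal illustration, not the authors' actual system: the behavior table, the `Listener` attributes, and the specific rules (mitigating gaze when the speaker has less power, exaggerating cues for a child) are all assumptions introduced here to make the idea concrete.

```python
# Hypothetical sketch: facial/gaze behaviors are indexed by communicative
# function (not appearance), then constrained by the speaker's power relation
# to the listener and the speaker's beliefs about the listener.
# All names and rules are illustrative assumptions, not the paper's system.

from dataclasses import dataclass

# Behaviors categorized by communicative function.
BEHAVIORS = {
    "request":     {"gaze": "direct",       "brows": "raised"},
    "question":    {"gaze": "direct",       "brows": "raised", "head": "tilt"},
    "information": {"gaze": "intermittent", "brows": "neutral"},
}

@dataclass
class Listener:
    kind: str    # e.g. "tourist", "employee", "child"
    power: str   # speaker's power relative to listener: "higher" / "equal" / "lower"

def choose_behavior(act: str, listener: Listener) -> dict:
    """Select facial/gaze behavior for a communicative act, adjusted by
    the power relation and beliefs about the listener (assumed rules)."""
    behavior = dict(BEHAVIORS[act])
    if listener.power == "lower":
        # Speaker has less power: mitigate the display.
        behavior["gaze"] = "averted"
        behavior["smile"] = "polite"
    elif listener.kind == "child":
        # Belief about the listener: exaggerate the communicative cues.
        behavior["brows"] = "exaggerated"
    return behavior

print(choose_behavior("request", Listener(kind="employee", power="lower")))
```

Keying the table on communicative function rather than muscle-level appearance is the point of the categorization step: the same function can later be mapped to different surface expressions per agent.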