One of the research goals in the human-computer interaction community is to build believable Embodied Conversational Agents (ECAs), that is, agents able to communicate complex information with human-like expressiveness and naturalness. Since emotions play a crucial role in human communication, and most of them are expressed through the face, making ECAs more believable means giving them the ability to display emotional facial expressions.

This paper presents a system based on Hidden Markov Models (HMMs) for the synthesis of emotional facial expressions during speech. The HMMs were trained on a set of emotion examples in which a professional actor uttered Italian nonsense words while acting out various emotional facial expressions at different intensities.

The experimental results were evaluated by comparing the "synthetic examples" (generated by the system) with a reference "natural example" (one of the actor's examples) in three different ways. The evaluation shows that HMM-based synthesis of emotional facial expressions has some limitations, but is suitable for making a synthetic Talking Head more expressive and realistic.
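The general idea of generating an expression trajectory from a trained HMM can be illustrated with a toy sketch. This is not the authors' system: the states (onset/apex/offset of an expression), the left-to-right transition matrix, and the Gaussian emission parameters for a single facial parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hidden states of a hypothetical expression: onset, apex, offset.
# Left-to-right transition matrix: states can only advance, never go back.
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])

# Gaussian emission parameters for one facial parameter (assumed values):
# mean intensity and standard deviation per state.
means = np.array([0.2, 1.0, 0.3])
stds = np.array([0.05, 0.10, 0.05])

def sample_trajectory(n_frames):
    """Sample a hidden-state path and the corresponding one-dimensional
    facial-parameter trajectory, one value per animation frame."""
    states, obs = [], 0, 
    states, obs = [], []
    s = 0  # start in the onset state
    for _ in range(n_frames):
        states.append(s)
        obs.append(rng.normal(means[s], stds[s]))
        s = rng.choice(3, p=A[s])  # move according to the transition row
    return states, obs

states, traj = sample_trajectory(30)
```

A real system would instead learn `A`, `means`, and `stds` from motion-captured examples (e.g. with Baum–Welch) and drive many facial parameters jointly, but the sampling step has this shape.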