Rigid head motion is a gesture that conveys important nonverbal information in human communication, and hence it needs to be appropriately modeled and included in realistic facial animations to effectively mimic human behaviors. In this paper, head motion sequences in expressive facial animations are analyzed in terms of their naturalness and emotional salience in perception. Statistical measures are derived from an audiovisual database comprising synchronized facial gestures and speech, revealing characteristic patterns in emotional head motion sequences. Head motion patterns accompanying neutral speech differ significantly from those accompanying emotional speech in motion activation, range, and velocity. The results show that head motion provides discriminating information about emotional categories. An approach to synthesize emotional head motion sequences driven by prosodic features is presented, expanding upon our previous framework for head motion synthesis. This method naturally models the specific temporal dynamics of emotional head motion sequences by building hidden Markov models for each emotional category (sadness, happiness, anger, and neutral state). Human raters were asked to assess the naturalness and the emotional content of the facial animations. On average, the synthesized head motion sequences were perceived as even more natural than the original head motion sequences. The results also show that head motion modifies the emotional perception of the facial animation, especially in the valence and activation domains. These results suggest that appropriate head motion not only significantly improves the naturalness of the animation but can also be used to enhance the emotional content of the animation to effectively engage the users.
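To make the analysis concrete, the three statistics the abstract names (motion activation, range, and velocity) could be computed from a head rotation trajectory as sketched below. The definitions here are plausible assumptions for illustration, not the paper's exact measures; the function name and frame-rate parameter are hypothetical.

```python
import numpy as np

def head_motion_stats(angles, fps=30.0):
    """Illustrative summary statistics for a head rotation trajectory.

    angles: (T, 3) array of per-frame head Euler angles in degrees.
    Returns:
      activation   - mean absolute deviation from the mean pose
      motion_range - per-axis (max - min), averaged over axes
      velocity     - mean frame-to-frame angular displacement, in deg/s
    These definitions are assumptions, not the paper's formulas.
    """
    angles = np.asarray(angles, dtype=float)
    activation = np.mean(np.abs(angles - angles.mean(axis=0)))
    motion_range = np.mean(angles.max(axis=0) - angles.min(axis=0))
    velocity = np.mean(np.linalg.norm(np.diff(angles, axis=0), axis=1)) * fps
    return activation, motion_range, velocity
```

Under such definitions, the abstract's finding would correspond to emotional sequences yielding systematically larger values on all three statistics than neutral ones.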