The ability to gesture is key to realizing virtual characters that can engage in face-to-face interaction with people. Many applications predefine a virtual character's possible utterances and build every gesture animation those utterances require. Constructing a general gesture controller that generates behavior for novel utterances would save much of this effort. Because the dynamics of human gestures are related to the prosody of the accompanying speech, we propose a model that generates gestures from prosody. We then assess the naturalness of the resulting animations by comparing them against human gestures. The evaluation results are promising: human judges found no significant difference between our generated gestures and real human gestures, and rated the generated gestures significantly higher than real human gestures taken from a different utterance.
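The core idea, a controller that maps prosodic features of speech to gesture behavior, can be illustrated with a minimal sketch. The feature extractor and the quantile-based state selector below are hypothetical stand-ins: the actual model is learned from motion-capture data, whereas this toy version only thresholds frame-level log-energy into coarse gesture states.

```python
import numpy as np

def prosody_features(signal, sr=16000, frame_len=0.025, hop=0.010):
    """Frame-level log-energy: a simple stand-in for prosody features
    such as pitch and intensity (hypothetical helper, not the paper's
    actual feature extractor)."""
    n = int(frame_len * sr)
    h = int(hop * sr)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n + 1, h)]
    return np.array([np.log(np.mean(f ** 2) + 1e-10) for f in frames])

def select_gestures(energy, states=("rest", "beat", "stroke")):
    """Map each frame's energy to a coarse gesture state by quantile
    thresholding -- a toy controller; a real system would learn this
    mapping from recorded human motion instead."""
    lo, hi = np.quantile(energy, [0.33, 0.66])
    return [states[0] if e < lo else states[1] if e < hi else states[2]
            for e in energy]

# Synthetic "speech": a quiet segment followed by a loud one.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
signal = np.concatenate([0.05 * np.sin(2 * np.pi * 220 * t),
                         0.8 * np.sin(2 * np.pi * 220 * t)])
energy = prosody_features(signal, sr)
gestures = select_gestures(energy)
```

In this sketch the loud second half of the signal drives the more energetic gesture states, mirroring the observation that gesture dynamics track speech prosody; the paper replaces the thresholding step with a trained probabilistic model.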