How to train your avatar: a data driven approach to gesture generation

  • Authors:
  • Chung-Cheng Chiu; Stacy Marsella

  • Affiliations:
  • University of Southern California, Institute for Creative Technologies, Playa Vista, CA (both authors)

  • Venue:
  • IVA '11: Proceedings of the 10th International Conference on Intelligent Virtual Agents
  • Year:
  • 2011

Abstract

The ability to gesture is key to realizing virtual characters that can engage in face-to-face interaction with people. Many applications predefine the possible utterances of a virtual character and hand-build all of the gesture animations needed for those utterances. A general gesture controller that can generate behavior for novel utterances would substantially reduce the effort of building a virtual human. Because the dynamics of human gestures are related to the prosody of speech, in this work we propose a model that generates gestures from prosody. We then assess the naturalness of the resulting animations by comparing them against human gestures. The evaluation results were promising: human judges found no significant difference between our generated gestures and the original human gestures, and rated the generated gestures significantly better than real human gestures taken from a different utterance.
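
To make the prosody-to-gesture idea concrete, here is a minimal illustrative sketch, not the authors' model: it assumes hypothetical frame-level pitch (F0) and intensity contours and maps them to a smoothed per-frame gesture "effort" value, on the premise that more prominent speech tends to drive larger motion.

```python
# Illustrative sketch only (not the paper's model): combine hypothetical
# prosodic contours (F0 and intensity) into a per-frame gesture "effort" curve.
import numpy as np

def normalize(x):
    """Scale a feature contour to [0, 1], guarding against flat contours."""
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def gesture_effort(f0_hz, intensity_db, smooth=5):
    """Average normalized pitch and intensity, then smooth with a moving average."""
    effort = 0.5 * normalize(f0_hz) + 0.5 * normalize(intensity_db)
    kernel = np.ones(smooth) / smooth  # simple moving-average smoothing window
    return np.convolve(effort, kernel, mode="same")

# Toy usage with synthetic contours (one value per 10 ms frame).
frames = 300
f0 = 120 + 30 * np.sin(np.linspace(0, 6 * np.pi, frames))  # pitch in Hz
energy = 60 + 10 * np.random.rand(frames)                   # intensity in dB
print(gesture_effort(f0, energy)[:10])
```

A real prosody-driven controller would of course learn the mapping from data and output full gesture animation rather than a single scalar; this sketch only shows the kind of feature-to-motion coupling the abstract describes.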