Model-based Animation of Coverbal Gesture

  • Authors:
  • Stefan Kopp; Ipke Wachsmuth

  • Venue:
  • CA '02: Proceedings of the Computer Animation
  • Year:
  • 2002

Abstract

Virtual conversational agents are expected to combine speech with nonverbal modalities to produce intelligible and believable utterances. However, the automatic synthesis of coverbal gestures still struggles with several problems, such as naturalness in procedurally generated animations, flexibility in pre-defined movements, and synchronization with speech. In this paper, we focus on generating complex multimodal utterances, including gesture and speech, from XML-based descriptions of their overt form. We describe a coordination model that reproduces co-articulation and transition effects in both modalities. In particular, an efficient kinematic approach to creating gesture animations from shape specifications is presented, which provides fine adaptation to the temporal constraints imposed by cross-modal synchrony.
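The cross-modal timing adaptation the abstract refers to can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' algorithm: it assumes a three-phase gesture (preparation, stroke, retraction) and retimes the preparation so the stroke onset coincides with a time point dictated by speech, such as the start of the affiliated stressed syllable.

```python
# Hypothetical sketch (not the paper's implementation): adapt a gesture's
# phase timeline to a stroke-onset constraint imposed by speech.

def retime_gesture(durations, stroke_onset):
    """Return phase intervals with the preparation stretched or compressed
    so that the stroke begins exactly at `stroke_onset`.

    durations: intrinsic phase lengths in seconds for "preparation",
    "stroke", and "retraction".
    stroke_onset: required stroke start time relative to gesture start.
    """
    if stroke_onset <= 0:
        raise ValueError("stroke onset must lie after the gesture start")
    # The preparation phase absorbs the timing constraint; the stroke and
    # retraction keep their intrinsic durations.
    stroke_end = stroke_onset + durations["stroke"]
    return {
        "preparation": (0.0, stroke_onset),
        "stroke": (stroke_onset, stroke_end),
        "retraction": (stroke_end, stroke_end + durations["retraction"]),
    }

# Example: speech analysis demands the stroke at t = 0.5 s.
timeline = retime_gesture(
    {"preparation": 0.4, "stroke": 0.25, "retraction": 0.5},
    stroke_onset=0.5,
)
```

The design choice mirrored here is that gesture timing yields to speech timing, in line with the abstract's claim that the animation approach adapts to temporal constraints from cross-modal synchrony; in the paper itself the adaptation operates on kinematic trajectories rather than on a simple phase table.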