Synthesizing multimodal utterances for conversational agents

  • Authors:
  • Stefan Kopp; Ipke Wachsmuth

  • Venue:
  • Computer Animation and Virtual Worlds
  • Year:
  • 2004

Abstract

Conversational agents are expected to combine speech with non-verbal modalities to produce intelligible multimodal utterances. In this paper, we focus on the generation of gesture and speech from XML-based descriptions of their overt form. An incremental production model is presented that combines the synthesis of synchronized gestural, verbal, and facial behaviors with mechanisms for linking them into fluent utterances with natural co-articulation and transition effects. In particular, an efficient kinematic approach for animating hand gestures from shape specifications is presented, which provides fine adaptation to the temporal constraints imposed by cross-modal synchrony. Copyright © 2004 John Wiley & Sons, Ltd.
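To make the idea of "fine adaptation to temporal constraints imposed by cross-modal synchrony" concrete, the following Python sketch retimes the phases of a gesture so that its stroke begins exactly at the onset of the affiliated word, as reported by the speech side. This is only an illustration of the scheduling problem, not the authors' algorithm: the `Phase` type, the preparation/stroke/retraction decomposition, and the uniform scaling of pre-stroke phases are assumptions made here for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Phase:
    name: str        # "preparation", "stroke", or "retraction"
    duration: float  # preferred duration in seconds

def schedule_gesture(phases: List[Phase],
                     stroke_onset: float) -> List[Tuple[str, float, float]]:
    """Place gesture phases on the utterance timeline so the stroke
    starts exactly at `stroke_onset` (e.g. the onset of the affiliated
    word). Pre-stroke phases are uniformly stretched or compressed to
    fill the available time; the stroke and later phases keep their
    preferred durations."""
    stroke_idx = next(i for i, p in enumerate(phases) if p.name == "stroke")
    pre_total = sum(p.duration for p in phases[:stroke_idx])
    scale = stroke_onset / pre_total if pre_total > 0 else 1.0

    timeline, t = [], 0.0
    for i, p in enumerate(phases):
        d = p.duration * scale if i < stroke_idx else p.duration
        timeline.append((p.name, t, t + d))
        t += d
    return timeline

if __name__ == "__main__":
    gesture = [Phase("preparation", 0.60),
               Phase("stroke", 0.40),
               Phase("retraction", 0.50)]
    # Speech timing says the affiliated word starts 0.45 s into the
    # utterance, so the preparation is compressed from 0.60 s to 0.45 s.
    for name, start, end in schedule_gesture(gesture, stroke_onset=0.45):
        print(f"{name:12s} {start:5.2f} - {end:5.2f} s")
```

A real production model would do considerably more (e.g. co-articulate adjacent gestures and blend transitions, as the abstract notes), but the sketch shows the basic constraint: speech timing fixes the stroke onset, and the gesture animation must adapt its remaining degrees of freedom around it.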