Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents

  • Authors:
  • Justine Cassell;Catherine Pelachaud;Norman Badler;Mark Steedman;Brett Achorn;Tripp Becket;Brett Douville;Scott Prevost;Matthew Stone

  • Affiliations:
  • Department of Computer & Information Science, University of Pennsylvania (all authors)

  • Venue:
  • SIGGRAPH '94 Proceedings of the 21st annual conference on Computer graphics and interactive techniques
  • Year:
  • 1994

Abstract

We describe an implemented system that automatically generates and animates conversations between multiple human-like agents, with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive the facial expression, lip motion, eye gaze, head motion, and arm gesture generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout, we use examples from an actual synthesized, fully animated conversation.
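
The abstract describes a pipeline architecture: a dialogue planner produces text annotated with intonation, and each animation channel (face, lips, gaze, head, arms) is a generator driven by that annotated text plus the speaker/listener relationship, so every channel stays synchronized with speech. As a minimal, hypothetical Python sketch of that data flow (the Utterance type, the generator functions, and the example line are illustrative assumptions, not the paper's actual implementation):

    from dataclasses import dataclass, field

    @dataclass
    class Utterance:
        # Output of the dialogue planner: text plus intonation annotations.
        # Intonation is reduced here to a set of pitch-accented words.
        speaker: str
        listener: str
        text: str
        accented_words: set = field(default_factory=set)

    def plan_dialogue():
        """Stand-in for the dialogue planner; yields annotated utterances."""
        yield Utterance("agent_a", "agent_b", "Do you have an account here?",
                        accented_words={"account"})

    # Each generator maps one annotated utterance to timed animation events.
    def facial_expression(u):
        return [("raise_eyebrows", w) for w in u.accented_words]

    def lip_motion(u):
        return [("visemes_for", u.text)]

    def eye_gaze(u):
        return [("look_at", u.listener)]

    def head_motion(u):
        return [("nod_on", w) for w in u.accented_words]

    def arm_gesture(u):
        # One coordinated arm/wrist/hand motion, timed to the accented word.
        return [("gesture_on", w) for w in u.accented_words]

    def animate(u):
        """Run every channel generator and merge events onto one timeline."""
        events = []
        for generator in (facial_expression, lip_motion, eye_gaze,
                          head_motion, arm_gesture):
            events.extend(generator(u))
        return events

    for utterance in plan_dialogue():
        for event in animate(utterance):
            print(utterance.speaker, event)

The sketch is only meant to show the dependency direction the abstract states: intonation is decided once, by the planner, and every motor channel reads it, rather than each channel inventing its own timing.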