Implementing expressive gesture synthesis for embodied conversational agents

  • Authors:
  • Björn Hartmann, Maurizio Mancini, Catherine Pelachaud

  • Affiliations:
  • Björn Hartmann: Computer Science Department, Stanford University, Stanford, CA; Maurizio Mancini and Catherine Pelachaud: LINC-LIA, University of Paris-8, Montreuil, France

  • Venue:
  • GW'05: Proceedings of the 6th International Conference on Gesture in Human-Computer Interaction and Simulation
  • Year:
  • 2005

Abstract

We aim to create an expressive Embodied Conversational Agent (ECA) and address the problem of synthesizing expressive agent gestures. In our previous work, we described the gesture selection process. In this paper, we present a computational model of gesture quality: once a certain gesture has been chosen for execution, how can we modify it to carry a desired expressive content while retaining its original semantics? We characterize bodily expressivity with a small set of dimensions derived from a review of the psychology literature. We provide a detailed description of the implementation of these dimensions in our animation system, including our gesture modeling language. We also demonstrate animations with different expressivity settings in our existing ECA system. Finally, we describe two user studies that evaluate the appropriateness of our implementation for each dimension of expressivity, as well as the potential of combining these dimensions to create expressive gestures that reflect communicative intent.
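
The abstract describes modifying an already-selected gesture along expressivity dimensions while preserving its semantics. As a minimal sketch of that idea (not the paper's actual model or API), the following Python fragment scales the amplitude and timing of a keyframe gesture; the `Keyframe` class, the parameter names `spatial_extent` and `temporal_extent`, and the scaling constants are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    time: float       # seconds from gesture start
    wrist_pos: tuple  # (x, y, z) wrist position, assumed shoulder-centered

def apply_expressivity(frames, spatial_extent=0.0, temporal_extent=0.0):
    """Rescale a gesture's amplitude and duration (parameters in [-1, 1]).

    The keyframe ordering and the shape of the stroke are untouched, which
    is one plausible reading of 'changing expressivity while retaining the
    original semantics'. Constants are arbitrary illustrative gains.
    """
    amp = 1.0 + 0.5 * spatial_extent    # wider or narrower use of space
    dur = 1.0 - 0.3 * temporal_extent   # higher temporal extent -> faster
    return [Keyframe(time=f.time * dur,
                     wrist_pos=tuple(c * amp for c in f.wrist_pos))
            for f in frames]

# Usage: a three-keyframe beat gesture made larger and quicker.
beat = [Keyframe(0.0, (0.0, 0.0, 0.2)),
        Keyframe(0.4, (0.3, 0.2, 0.4)),  # stroke apex
        Keyframe(0.8, (0.0, 0.0, 0.2))]
energetic = apply_expressivity(beat, spatial_extent=0.8, temporal_extent=0.5)
```

A real implementation would cover more dimensions (the abstract mentions a small set derived from the psychology literature) and operate on the agent's gesture modeling language rather than raw keyframes; this sketch only shows the basic parameterize-then-replay pattern.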