Responsive listening behavior

  • Authors:
  • M. Gillies, X. Pan, M. Slater, J. Shawe-Taylor

  • Affiliations:
  • Department of Computing, Goldsmiths College, University of London, New Cross, London SE14 6NW, UK.

  • Venue:
  • Computer Animation and Virtual Worlds
  • Year:
  • 2008


Abstract

Humans use their bodies in a highly expressive way during conversation, and animated characters that lack this form of non-verbal expression can seem stiff and unemotional. An important aspect of non-verbal expression is that people respond to each other's behavior and are highly attuned to picking up this type of response. This is particularly important for the feedback given while listening to someone speak. However, automatically generating this type of behavior is difficult, as it is highly complex and subtle. This paper takes a data-driven approach to generating interactive social behavior. Listening behavior is motion captured, together with the audio being listened to. These data are used to learn an animation model of the responses of one person to the other. This allows us to create characters that respond in real time during a conversation with a real human. Copyright 2008 John Wiley & Sons, Ltd.
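The abstract does not specify the paper's learning algorithm, but the pipeline it describes (paired audio features and motion-captured listener responses, a learned mapping, real-time per-frame prediction) can be sketched minimally. The sketch below uses synthetic stand-in data and a ridge-regularized linear map purely for illustration; the feature dimensions, regularizer, and `respond` function are all assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired training data: per-frame speaker audio descriptors
# (e.g. energy, pitch) and listener motion-capture parameters (e.g. joint
# rotations). Synthetic stand-ins here, with a hidden linear relationship.
n_frames, n_audio, n_motion = 500, 8, 12
audio = rng.normal(size=(n_frames, n_audio))
true_map = rng.normal(size=(n_audio, n_motion))
motion = audio @ true_map + 0.05 * rng.normal(size=(n_frames, n_motion))

# Learn a ridge-regularized linear map from audio features to listener
# motion -- a deliberately simple stand-in for the paper's learned model.
lam = 1e-2
W = np.linalg.solve(audio.T @ audio + lam * np.eye(n_audio), audio.T @ motion)

def respond(audio_frame):
    """Predict one frame of listener motion from one frame of speaker audio."""
    return audio_frame @ W

# Real-time use: each incoming audio frame yields one motion frame, so a
# character can react while the human interlocutor is still speaking.
pred = respond(audio[0])
print(pred.shape)  # → (12,)
```

A linear map is enough to show the data flow; the real problem is far harder because listening responses are subtle, delayed, and context-dependent, which is why the paper learns the mapping from captured conversational data rather than hand-authoring it.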