Expressive audio-visual speech

  • Authors:
  • Elisabetta Bevacqua; Catherine Pelachaud

  • Venue:
  • Computer Animation and Virtual Worlds - Special Issue: The Very Best Papers from CASA 2004
  • Year:
  • 2004

Abstract

We aim at the realization of an Embodied Conversational Agent able to interact naturally and emotionally with the user. In particular, the agent should behave expressively. Specifying, for a given emotion, its corresponding facial expression does not by itself produce the sensation of expressivity; to do so, one also needs to specify parameters such as intensity, tension, and movement properties. Moreover, emotion also affects lip shapes during speech: simply adding the facial expression of emotion to the lip shape does not produce lip-readable movement. In this paper we present a model based on real data captured from a speaker wearing passive markers. The data cover natural speech as well as emotional speech. We present an algorithm that determines the appropriate viseme and applies coarticulation and correlation rules to account for the vocalic and consonantal contexts as well as muscular phenomena such as lip compression and lip stretching. Expressive qualifiers are then used to modulate the expressivity of lip movement. Our model of lip movement is applied to a 3D facial model compliant with the MPEG-4 standard. Copyright © 2004 John Wiley & Sons, Ltd.
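The abstract only outlines the pipeline, so the following is a minimal sketch of the kind of processing it describes: pick a lip target per viseme, blend consonant targets with the surrounding vocalic context (a crude stand-in for the paper's coarticulation and correlation rules), and scale lip opening by an expressive qualifier. Everything here is assumed for illustration: the VISEME_TARGETS table, the Segment type, and the alpha and expressivity parameters are hypothetical and are not taken from the paper's motion-capture data.

# Illustrative sketch only: the paper's actual viseme tables, coarticulation
# rules, and expressive qualifiers are derived from motion-capture data and
# are not reproduced here. All tables and parameters below are hypothetical.

from dataclasses import dataclass

# Hypothetical lip targets per viseme, as a small set of MPEG-4-style lip
# parameters (opening, width, protrusion), normalized to [0, 1].
VISEME_TARGETS = {
    "a": {"open": 0.9, "width": 0.5, "protrusion": 0.1},
    "i": {"open": 0.3, "width": 0.9, "protrusion": 0.0},
    "u": {"open": 0.3, "width": 0.1, "protrusion": 0.9},
    "m": {"open": 0.0, "width": 0.5, "protrusion": 0.2},  # bilabial closure
    "s": {"open": 0.2, "width": 0.7, "protrusion": 0.0},
}

VOWELS = {"a", "i", "u"}

@dataclass
class Segment:
    phoneme: str
    duration: float  # seconds


def lip_targets(segments, alpha=0.5, expressivity=1.0):
    """Compute one lip target per segment.

    Consonant targets are blended with the immediately neighboring vowel
    targets (a crude stand-in for coarticulation), and the lip-opening
    amplitude is scaled by an expressivity qualifier; alpha and
    expressivity are hypothetical parameters.
    """
    out = []
    for i, seg in enumerate(segments):
        target = dict(VISEME_TARGETS[seg.phoneme])
        if seg.phoneme not in VOWELS:
            # Pull the consonant's width/protrusion toward the vocalic
            # context of its neighbors.
            neighbors = [s for s in (segments[i - 1:i] + segments[i + 1:i + 2])
                         if s.phoneme in VOWELS]
            for n in neighbors:
                ctx = VISEME_TARGETS[n.phoneme]
                for key in ("width", "protrusion"):
                    target[key] = (1 - alpha) * target[key] + alpha * ctx[key]
        # Expressive qualifier: exaggerate or attenuate lip opening,
        # clamped to the valid range.
        target["open"] = min(1.0, target["open"] * expressivity)
        out.append((seg.phoneme, target))
    return out


if __name__ == "__main__":
    utterance = [Segment("m", 0.08), Segment("a", 0.15),
                 Segment("s", 0.10), Segment("u", 0.14)]
    for phon, t in lip_targets(utterance, expressivity=1.3):  # e.g. joyful speech
        print(phon, {k: round(v, 2) for k, v in t.items()})

One deliberate choice in the sketch: coarticulation only adjusts width and protrusion, and the expressivity factor multiplies the opening, so a bilabial closure (open = 0.0) survives both steps. This loosely mirrors the abstract's point that emotional expression must not be naively added on top of lip shapes at the expense of lip readability.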