Towards Facial Gestures Generation by Speech Signal Analysis Using HUGE Architecture

  • Authors:
  • Goranka Zoric; Karlo Smid; Igor S. Pandzic

  • Affiliations:
  • Department of Telecommunications, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, HR-10 000 (Zoric, Pandzic); Ericsson Nikola Tesla, Zagreb, HR-10 002 (Smid)

  • Venue:
  • Multimodal Signals: Cognitive and Algorithmic Issues
  • Year:
  • 2009

Abstract

In our current work we concentrate on finding correlations between the speech signal and the occurrence of facial gestures. The motivation behind this work is the Embodied Conversational Agent (ECA), a computer-generated human correspondent. To be a believable human representative, an ECA must produce facial gestures in addition to verbal and emotional displays. The information needed to generate facial gestures is extracted from speech prosody by analyzing natural speech in real time. This work builds on the previously developed HUGE architecture for statistically based facial gesturing and extends our previous work on automatic real-time lip synchronization.
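The abstract states that gesture-related information is extracted from speech prosody in real time. As an illustration only, and not the authors' actual pipeline, a minimal per-frame prosody front end could compute frame energy and an autocorrelation-based pitch estimate; the sample rate, frame length, and pitch search range below are assumed values chosen for the sketch.

```python
import math

SAMPLE_RATE = 8000   # assumed mono sample rate (Hz)
FRAME = 400          # 50 ms analysis frame

def frame_energy(frame):
    # mean squared amplitude: a crude loudness cue
    return sum(s * s for s in frame) / len(frame)

def frame_pitch(frame, fmin=80.0, fmax=400.0):
    # search autocorrelation peaks over lags that correspond
    # to a plausible speaking pitch range (fmin..fmax Hz)
    lag_min = int(SAMPLE_RATE / fmax)
    lag_max = int(SAMPLE_RATE / fmin)
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        r = sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))
        if r > best_r:
            best_r, best_lag = r, lag
    return SAMPLE_RATE / best_lag

# synthetic "voiced" frame: a 150 Hz sine wave
signal = [math.sin(2 * math.pi * 150 * n / SAMPLE_RATE) for n in range(FRAME)]
print(frame_pitch(signal), frame_energy(signal))
```

A real system would run such features frame by frame on a live microphone stream and feed them, together with lip-sync output, to the statistical gesture model.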