Speech Driven MPEG-4 Based Face Animation via Neural Network

  • Authors:
  • Yiqiang Chen;Wen Gao;Zhaoqi Wang;Li Zuo

  • Venue:
  • PCM '01 Proceedings of the Second IEEE Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing
  • Year:
  • 2001

Abstract

In this paper, clustering and machine learning methods are combined to learn the correspondence between speech acoustics and MPEG-4 face animation parameters. Audio and image features are extracted from a large recorded audio-visual database. Face animation parameter (FAP) sequences are computed and then clustered into FAP patterns. An artificial neural network (ANN) is trained to map linear predictive coefficients (LPC) and prosodic features of an individual's natural speech to these FAP patterns. Experimental results show that the proposed learning algorithm is effective and greatly improves the realism of real-time face animation during speech.
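The mapping described in the abstract, from per-frame acoustic features to a clustered FAP pattern, can be sketched as a small feedforward network. This is an illustrative sketch only: the feature dimensions (12 LPC coefficients plus 2 prosodic features), the number of FAP patterns (8), and the network architecture are assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical dimensions, not taken from the paper: 12 LPC coefficients
# plus 2 prosodic features (e.g. energy, pitch) per speech frame, mapped
# to one of 8 clustered FAP patterns.
N_LPC, N_PROSODY, N_PATTERNS, N_HIDDEN = 12, 2, 8, 32

rng = np.random.default_rng(0)

def init_mlp():
    """Randomly initialized one-hidden-layer network (untrained sketch)."""
    return {
        "W1": rng.normal(0.0, 0.1, (N_LPC + N_PROSODY, N_HIDDEN)),
        "b1": np.zeros(N_HIDDEN),
        "W2": rng.normal(0.0, 0.1, (N_HIDDEN, N_PATTERNS)),
        "b2": np.zeros(N_PATTERNS),
    }

def forward(params, x):
    """Map one frame of acoustic features to a distribution over FAP patterns."""
    h = np.tanh(x @ params["W1"] + params["b1"])
    logits = h @ params["W2"] + params["b2"]
    # Numerically stable softmax over the FAP-pattern classes.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

params = init_mlp()
frame = rng.normal(size=N_LPC + N_PROSODY)   # one frame of acoustic features
probs = forward(params, frame)
pattern_id = int(np.argmax(probs))           # FAP pattern driving the face model
```

At synthesis time, the selected pattern index would look up a stored FAP sequence to animate the face model in real time; training (e.g. by backpropagation on the recorded audio-visual corpus) is omitted here.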