Text-to-Visual Speech Synthesis for General Objects Using Parameter-Based Lip Models

  • Authors:
  • Ze-Jing Chuang, Chung-Hsien Wu

  • Venue:
  • PCM '02 Proceedings of the Third IEEE Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing
  • Year:
  • 2002

Abstract

This paper presents four parameter-based 3-dimensional (3D) lip models for Chinese text-to-visual speech synthesis. These models can be applied to general objects with lip-like meshes. Three main components are described: the generation of a weighted parameter sequence of lip motions for each Mandarin syllable, the definition and construction of the parameter-based lip models, and the synchronization of speech with facial animation. The results show that the system produces promising and encouraging speech and facial animation output.
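To make the idea of driving a lip mesh from a weighted parameter sequence concrete, here is a minimal, hypothetical sketch (not the authors' implementation; all names such as `LipKeyframe` and `interpolate_parameters`, and the choice of four lip parameters, are illustrative assumptions). It linearly blends weighted lip-parameter keyframes over a syllable's duration, which is one simple way to realize the speech/animation synchronization the abstract describes.

```python
# Illustrative sketch only: weighted linear interpolation of lip-parameter
# keyframes across a syllable's duration. Parameter names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LipKeyframe:
    time: float                 # seconds from syllable onset
    params: List[float]         # e.g. [jaw_open, lip_width, protrusion, rounding]
    weight: float = 1.0         # articulation strength for this keyframe

def interpolate_parameters(keyframes: List[LipKeyframe], t: float) -> List[float]:
    """Return the weighted lip-parameter vector at time t (clamped at the ends)."""
    frames = sorted(keyframes, key=lambda k: k.time)
    if t <= frames[0].time:
        return [p * frames[0].weight for p in frames[0].params]
    if t >= frames[-1].time:
        return [p * frames[-1].weight for p in frames[-1].params]
    for a, b in zip(frames, frames[1:]):
        if a.time <= t <= b.time:
            alpha = (t - a.time) / (b.time - a.time)
            return [(1 - alpha) * pa * a.weight + alpha * pb * b.weight
                    for pa, pb in zip(a.params, b.params)]

# Example: a syllable that opens and then closes the mouth over 0.2 s.
kf = [LipKeyframe(0.0, [0.0, 0.5, 0.0, 0.0]),
      LipKeyframe(0.1, [1.0, 0.7, 0.2, 0.1]),
      LipKeyframe(0.2, [0.0, 0.5, 0.0, 0.0])]
mid = interpolate_parameters(kf, 0.05)  # halfway into the opening phase
```

Evaluating such a curve at the audio frame rate keeps the mesh deformation aligned with the synthesized speech; the per-keyframe weights allow the same syllable template to be articulated more or less strongly.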