Automatic lip-synchronization using linear prediction of speech

  • Authors:
  • C. J. Kohnert; S. K. Semwal

  • Affiliations:
  • Department of Computer Science, University of Colorado, Colorado Springs (both authors)

  • Venue:
  • SPPRA '06: Proceedings of the 24th IASTED International Conference on Signal Processing, Pattern Recognition, and Applications
  • Year:
  • 2006


Abstract

This paper presents a method of automatic lip-synchronization driven purely by analysis of the underlying speech. Using linear prediction, short segments of speech are classified into phonemes and then mapped to corresponding visemes. The resulting sequence of viseme matches is weighted by previous matches in the stream to alleviate some of the problems caused by the co-articulation effect. The result is a highly recognizable, fully automatic, and relatively speaker-independent system of lip-synchronization.
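The pipeline the abstract describes (linear-prediction analysis, phoneme classification, viseme mapping, then history-based weighting) might be sketched as follows. This is a minimal illustration, not the authors' implementation: the LPC order, the sine-wave "phoneme" templates, the `PHONEME_TO_VISEME` table, and the hold-based smoothing rule are all assumptions chosen for brevity.

```python
import math

def autocorr(frame, max_lag):
    # Autocorrelation r[0..max_lag] of one windowed speech frame.
    return [sum(frame[n] * frame[n - k] for n in range(k, len(frame)))
            for k in range(max_lag + 1)]

def lpc(frame, order=2):
    # Levinson-Durbin recursion: linear-prediction coefficients from
    # the frame's autocorrelation sequence (order 2 keeps the toy sine
    # signals below exactly predictable and the recursion stable).
    r = autocorr(frame, order)
    a, err = [1.0] + [0.0] * order, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + sum(a[j] * r[i - j] for j in range(1, i))) / err
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a, err = new_a, err * (1.0 - k * k)
    return a[1:]  # drop the leading 1.0

def classify(frame, templates):
    # Nearest-neighbour match of the frame's LPC vector against
    # reference phoneme templates (assumed trained offline).
    coeffs = lpc(frame)
    return min(templates,
               key=lambda p: sum((c - t) ** 2
                                 for c, t in zip(coeffs, templates[p])))

# Hypothetical phoneme-to-viseme table; a real system uses a much fuller map.
PHONEME_TO_VISEME = {"AA": "open", "IY": "spread"}

def smooth(phonemes, hold=2):
    # Weight the match stream by its recent history: a new label is adopted
    # only after it persists for `hold` frames, suppressing single-frame
    # flips (a crude stand-in for the paper's co-articulation weighting).
    out, cur, pending, run = [], None, None, 0
    for p in phonemes:
        if p == cur:
            pending, run = None, 0
        else:
            run = run + 1 if p == pending else 1
            pending = p
            if cur is None or run >= hold:
                cur, pending, run = p, None, 0
        out.append(cur)
    return out
```

As a usage sketch, two sine waves at different frequencies can stand in for frames of two phonemes: their order-2 LPC vectors differ (roughly `[-2*cos(w), 1]`), so nearest-neighbour classification separates them, and `smooth` then removes isolated misclassifications before the viseme lookup.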