Computer facial animation
This paper presents a method of automatic lip-synchronization driven purely by an analysis of the underlying speech. Through the use of linear prediction, small segments of speech are classified into phonemes and then mapped to corresponding visemes. The resulting sequence of viseme matches is then weighted against previous matches in the stream in order to alleviate some of the problems caused by the co-articulation effect. The result is a highly recognizable, fully automatic, relatively speaker-independent system of lip-synchronization.
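The abstract describes a three-stage pipeline: frame-wise linear prediction analysis, phoneme-to-viseme mapping, and a history-weighted smoothing of the viseme stream. The paper itself supplies no code, so the following is a minimal sketch of that pipeline in Python, under stated assumptions: the function names (`lpc_coefficients`, `lip_sync`), the `templates` and `phoneme_to_viseme` dictionaries, and the `decay` parameter are all hypothetical, and nearest-template matching plus exponential decay stand in for whatever classifier and weighting scheme the paper actually uses.

```python
import numpy as np

def lpc_coefficients(frame, order=12):
    """LPC coefficients of one windowed frame via the Levinson-Durbin recursion."""
    n = len(frame)
    # Autocorrelation at lags 0..order (index n-1 of the full correlation is lag 0).
    r = np.correlate(frame, frame, mode="full")[n - 1 : n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-9  # guard against all-zero (silent) frames
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])
        k = -acc / err                                # reflection coefficient
        a[1 : i + 1] = a[1 : i + 1] + k * a[i - 1 :: -1]
        err *= 1.0 - k * k
    return a[1:]

def lip_sync(signal, sample_rate, templates, phoneme_to_viseme,
             frame_ms=20, decay=0.6):
    """Map a speech signal to one viseme per frame (hypothetical interface).

    `templates` maps phoneme labels to reference LPC vectors;
    `phoneme_to_viseme` maps phoneme labels to viseme labels.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    window = np.hamming(frame_len)
    scores = {v: 0.0 for v in phoneme_to_viseme.values()}
    visemes = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        coeffs = lpc_coefficients(signal[start : start + frame_len] * window)
        # Classify the frame as the nearest phoneme template in LPC space.
        phoneme = min(templates,
                      key=lambda p: np.linalg.norm(coeffs - templates[p]))
        match = phoneme_to_viseme[phoneme]
        # Decay old evidence and reinforce the current match, so a single
        # co-articulated frame cannot flip the mouth shape on its own.
        for v in scores:
            scores[v] *= decay
        scores[match] += 1.0
        visemes.append(max(scores, key=scores.get))
    return visemes
```

In this sketch the `decay` constant trades responsiveness against stability: values near 0 track each frame's raw classification, while values near 1 require several consecutive matches before the displayed viseme changes, which is one plausible reading of weighting matches "based on previous matches in the stream."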