Animation of Synthetic Faces in MPEG-4
CA '98 Proceedings of the Computer Animation
In this paper, clustering and machine learning methods are combined to learn the correspondence between speech acoustics and MPEG-4 face animation parameters. Audio and image features are extracted from a large recorded audio-visual database. Face animation parameter (FAP) sequences are computed and then clustered into FAP patterns. An artificial neural network (ANN) is trained to map linear predictive coding (LPC) coefficients and prosodic features of an individual's natural speech to these FAP patterns. Experimental results show that the proposed learning algorithm is effective and greatly improves the realism of real-time face animation during speech.
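The pipeline described above (cluster FAP frames into patterns, then train a network to map audio features to those patterns) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the number of patterns, the use of k-means, and the MLP architecture are all assumptions, and random placeholder data stands in for the recorded audio-visual database.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

# Assumed dimensions: MPEG-4 defines 68 FAPs; here 12 LPC coefficients
# plus 2 prosodic features (e.g. energy, pitch) per frame are assumed.
rng = np.random.default_rng(0)
n_frames, n_faps, n_audio = 500, 68, 14

fap_seq = rng.normal(size=(n_frames, n_faps))       # FAP sequences (placeholder data)
audio_feats = rng.normal(size=(n_frames, n_audio))  # LPC + prosodic features (placeholder)

# Step 1: cluster FAP frames into a small codebook of FAP patterns.
n_patterns = 16
km = KMeans(n_clusters=n_patterns, n_init=10, random_state=0).fit(fap_seq)
pattern_labels = km.labels_                         # target pattern index per frame

# Step 2: train an ANN to map audio features to FAP pattern indices.
ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
ann.fit(audio_feats, pattern_labels)

# Step 3: at synthesis time, predict a pattern for each incoming audio frame
# and look up its FAP centroid to drive the face model.
pred = ann.predict(audio_feats[:1])
fap_out = km.cluster_centers_[pred[0]]              # FAP vector for the animated face
```

Mapping to a discrete codebook of FAP patterns, rather than regressing FAP values directly, keeps the per-frame lookup cheap, which is what makes real-time animation feasible in such a scheme.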