High-realistic and flexible virtual presenters
AMDO'10 Proceedings of the 6th international conference on Articulated motion and deformable objects
Nowadays, the presence of virtual characters in daily life is less and less surprising. However, there is a lack of resources and tools in the area of visual speech technologies for minority languages. In this paper we present an application that animates virtual characters in real time from live speech in Basque. To obtain a realistic facial animation, the lips must be synchronized with the audio. To accomplish this, we compared different methods for obtaining the final visemes through HMM-based speech recognition techniques. Finally, the implementation of a real prototype has demonstrated the feasibility of obtaining a quite natural animation in real time with a minimal amount of training data.
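The pipeline the abstract describes recognizes phonemes with HMMs and then maps them to visemes that drive the character's lips. A minimal sketch of that mapping step is below; the phoneme set, viseme classes, and grouping are illustrative assumptions for exposition, not the paper's actual Basque inventory or method.

```python
# Hypothetical sketch: mapping a phoneme sequence (as an HMM-based
# recognizer might emit it) to viseme classes for lip-sync.
# The table below is an illustrative assumption, not the paper's mapping.

PHONEME_TO_VISEME = {
    # bilabials -> closed lips
    "p": "closed", "b": "closed", "m": "closed",
    # rounded vowels
    "o": "round", "u": "round",
    # open / spread vowels
    "a": "open", "e": "open", "i": "spread",
    # labiodentals
    "f": "teeth",
}

def phonemes_to_visemes(phonemes, default="rest"):
    """Map each phoneme to a viseme, collapsing consecutive duplicates
    so the animation engine only receives viseme *changes*."""
    visemes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, default)
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

print(phonemes_to_visemes(["m", "a", "m", "a"]))
```

In a real-time setting this mapping would run on each recognized phoneme as it arrives; collapsing repeats keeps the face from being re-posed on every identical frame.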