Speech-assisted lip synchronization in audio-visual communications
ICIP '95: Proceedings of the 1995 International Conference on Image Processing, Volume 2
We propose a new speech communication system that converts oral motion images into speech. We call this system "the image input microphone." It provides high security and is unaffected by acoustic noise because the actual utterance need not be input. This system is especially promising as a speaking aid for people with injured vocal cords. Since this is a basic investigation of media conversion from image to speech, we focus on vowels and conduct experiments on their media conversion. The vocal-tract transfer function and the source signal for driving this filter are estimated from lip features extracted from oral images in a learning data set; speech is then synthesized by driving this filter with an appropriate source signal. The performance of this system is evaluated by hearing tests of the synthesized speech. The mean recognition rate for the test data set was 76.8%. We also investigate the effect of practice through iterative listening: the mean recognition rate rises from 69.4% to over 90% after four tests over four days. Consequently, we conclude that the proposed system has potential as a method of nonacoustic communication.
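The source-filter synthesis the abstract describes can be illustrated with a minimal sketch: a periodic glottal impulse train is passed through an all-pole vocal-tract filter. The filter coefficients here are illustrative placeholders, not values from the paper (which estimates them from lip features), and the function name is hypothetical.

```python
import numpy as np

def synthesize_vowel(lpc_coeffs, f0=100.0, fs=8000, duration=0.3):
    """Sketch of source-filter vowel synthesis: an impulse train at
    pitch f0 drives an all-pole (vocal-tract) filter defined by
    lpc_coeffs. Coefficients are assumed given; in the paper they
    would be estimated from lip-shape features."""
    n = int(fs * duration)
    # Source: periodic glottal impulse train at the pitch period
    source = np.zeros(n)
    period = int(fs / f0)
    source[::period] = 1.0
    # All-pole filter recursion: s[t] = x[t] - sum_k a_k * s[t-k]
    a = np.asarray(lpc_coeffs, dtype=float)
    out = np.zeros(n)
    for t in range(n):
        acc = source[t]
        for k in range(1, len(a) + 1):
            if t - k >= 0:
                acc -= a[k - 1] * out[t - k]
        out[t] = acc
    return out

# Hypothetical stable coefficients giving one resonance (~1 kHz formant)
speech = synthesize_vowel([-1.3, 0.8])
```

The two placeholder coefficients place a damped resonance inside the unit circle, so the output decays between pitch pulses, which is the qualitative behavior of a single vowel formant.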