The paper presents a robust audiovisual speech recognition technique called audiovisual speech fragment decoding. The technique addresses the challenge of recognizing speech in the presence of competing nonstationary noise sources. It employs two stages. First, an acoustic analysis decomposes the acoustic signal into a number of spectro-temporal fragments. Second, audiovisual speech models are used to select the fragments belonging to the target speech source. The approach is evaluated on a small-vocabulary simultaneous speech recognition task in conditions that promote two contrasting types of masking: energetic masking, caused by the energy of the masker utterance swamping that of the target, and informational masking, caused by similarity between target and masker that makes it difficult to selectively attend to the correct source. Results show that the system is able to use the visual cues to reduce the effects of both types of masking. Further, whereas recovery from energetic masking may require detailed visual information (i.e., sufficient to carry phonetic content), release from informational masking can be achieved using very crude visual representations that encode little more than the timing of mouth opening and closure.
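The two-stage pipeline described above can be illustrated with a minimal sketch. This is not the paper's actual model: the spectrogram data, the simple energy threshold, connected-component "fragments", and the frame-level `mouth_open` cue are all illustrative assumptions, standing in for the auditory analysis and the audiovisual speech models. It shows (1) carving above-threshold spectro-temporal regions into fragments and (2) selecting fragments whose timing coincides with a crude visual cue, mirroring the finding that mouth open/close timing alone can aid release from informational masking.

```python
# Hypothetical sketch of audiovisual speech fragment decoding, not the
# paper's implementation: fragments are 4-connected above-threshold
# regions of a toy spectrogram; selection uses a crude per-frame
# mouth-open/closed cue in place of learned audiovisual models.
from collections import deque

def fragments(spectrogram, threshold):
    """Stage 1: label 4-connected regions of above-threshold energy.

    spectrogram: 2D list, rows = frequency bins, columns = time frames.
    Returns a list of fragments, each a list of (freq, time) cells.
    """
    rows, cols = len(spectrogram), len(spectrogram[0])
    seen = [[False] * cols for _ in range(rows)]
    out = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or spectrogram[r][c] < threshold:
                continue
            frag, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                frag.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and spectrogram[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            out.append(frag)
    return out

def select_by_visual_timing(frags, mouth_open):
    """Stage 2 (crude visual cue only): keep fragments whose cells
    mostly fall in frames where the target speaker's mouth is open."""
    kept = []
    for frag in frags:
        frames = [t for _, t in frag]
        overlap = sum(mouth_open[t] for t in frames) / len(frames)
        if overlap > 0.5:
            kept.append(frag)
    return kept
```

With a toy spectrogram containing two energy blobs and a `mouth_open` sequence that is active only during the first blob, stage 1 yields two fragments and stage 2 retains only the one aligned with the visual timing. In the actual technique, this selection is driven by audiovisual speech models scoring fragment-to-source hypotheses rather than a fixed overlap rule.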