Energetic and informational masking effects in an audiovisual speech recognition system

  • Authors:
  • Jon Barker; Xu Shao

  • Affiliations:
  • Department of Computer Science, University of Sheffield, Sheffield, UK; Nuance Communications, Inc., Berkshire, UK and Department of Computer Science, University of Sheffield, Sheffield, UK

  • Venue:
  • IEEE Transactions on Audio, Speech, and Language Processing - Special issue on multimodal processing in speech-based interactions
  • Year:
  • 2009

Abstract

The paper presents a robust audiovisual speech recognition technique called audiovisual speech fragment decoding. The technique addresses the challenge of recognizing speech in the presence of competing nonstationary noise sources. It employs two stages. First, an acoustic analysis decomposes the acoustic signal into a number of spectro-temporal fragments. Second, audiovisual speech models are used to select the fragments belonging to the target speech source. The approach is evaluated on a small-vocabulary simultaneous speech recognition task in conditions that promote two contrasting types of masking: energetic masking, caused by the energy of the masker utterance swamping that of the target, and informational masking, caused by similarity between the target and masker that makes it difficult to attend selectively to the correct source. Results show that the system is able to use the visual cues to reduce the effects of both types of masking. Further, whereas recovery from energetic masking may require detailed visual information (i.e., sufficient to carry phonetic content), release from informational masking can be achieved using very crude visual representations that encode little more than the timing of mouth opening and closure.
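The two-stage process described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the grid, threshold, and scoring function below are hypothetical stand-ins. Stage 1 decomposes a toy time-frequency grid into spectro-temporal fragments (4-connected regions of above-threshold energy); stage 2 labels each fragment as target or masker using a placeholder per-fragment score in place of the paper's audiovisual speech models.

```python
# Illustrative sketch of two-stage fragment decoding (assumed, simplified).
from itertools import product

def find_fragments(grid, threshold):
    """Stage 1: flood-fill connected above-threshold cells into fragments."""
    rows, cols = len(grid), len(grid[0])
    seen, fragments = set(), []
    for r, c in product(range(rows), range(cols)):
        if (r, c) in seen or grid[r][c] < threshold:
            continue
        stack, frag = [(r, c)], []
        seen.add((r, c))
        while stack:
            y, x = stack.pop()
            frag.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < rows and 0 <= nx < cols
                        and (ny, nx) not in seen and grid[ny][nx] >= threshold):
                    seen.add((ny, nx))
                    stack.append((ny, nx))
        fragments.append(frag)
    return fragments

def select_target(fragments, target_score):
    """Stage 2: keep fragments the (hypothetical) scorer assigns to the target."""
    return [f for f in fragments if target_score(f) > 0]

# Toy spectrogram: rows = frequency channels, columns = time frames.
grid = [
    [9, 9, 0, 0, 0],
    [9, 9, 0, 7, 7],
    [0, 0, 0, 7, 7],
]
frags = find_fragments(grid, threshold=5)
# Hypothetical scorer standing in for the audiovisual models:
# prefer low-frequency cells (rows 0-1 in this toy grid).
target = select_target(frags, lambda f: sum(1 if y < 2 else -1 for y, x in f))
```

On this toy grid the first stage finds two fragments (the two energy blobs), and the second stage keeps only the low-frequency one as the target. In the paper, the scoring step is where the visual stream contributes: fragments are selected by how well they fit the audiovisual speech models rather than by a fixed heuristic.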