Audio-visual speech understanding in simulated telephony applications by individuals with hearing loss

  • Authors:
  • Linda Kozma-Spytek, Paula Tucker, Christian Vogler

  • Affiliations:
  • Gallaudet University, Washington, DC (all authors)

  • Venue:
  • Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility
  • Year:
  • 2013

Abstract

We present a study of the effects of adding a video channel, of video frame rate, and of audio-video synchrony on the ability of people with hearing loss to understand spoken language during video telephone conversations. Analysis indicates that higher frame rates yield a significant improvement in speech understanding, even when audio and video are not perfectly synchronized. At lower frame rates, audio-video synchrony is critical: if the audio is perceived 100 ms ahead of the video, understanding drops significantly; if, on the other hand, the audio is perceived 100 ms behind the video, understanding does not degrade compared with perfect audio-video synchrony. These findings are validated by extensive statistical analysis of two within-subjects experiments with 24 and 22 participants, respectively.
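
The paper does not publish its analysis code. Purely as an illustration of the kind of within-subjects design the abstract describes, the sketch below sets up a two-factor repeated-measures ANOVA (frame rate × audio-video offset) on speech-understanding scores using statsmodels' AnovaRM. The factor names, condition levels, and randomly generated scores are all assumptions invented for this example; they are not the authors' data or their actual analysis.

```python
# Hypothetical sketch of a two-way within-subjects (repeated-measures) analysis.
# All column names, factor levels, and scores below are fabricated for illustration.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
participants = range(24)                      # experiment 1 had 24 participants
frame_rates = ["low_fps", "high_fps"]         # hypothetical factor levels
av_offsets = ["audio_100ms_ahead", "in_sync", "audio_100ms_behind"]

rows = []
for p in participants:
    for fr in frame_rates:
        for off in av_offsets:
            # fabricated understanding scores so the example runs end to end
            score = rng.normal(loc=70, scale=10)
            rows.append({"subject": p, "frame_rate": fr,
                         "av_offset": off, "understanding": score})
data = pd.DataFrame(rows)

# Repeated-measures ANOVA with both factors varied within subjects
result = AnovaRM(data, depvar="understanding", subject="subject",
                 within=["frame_rate", "av_offset"]).fit()
print(result)
```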