Audio-visual speech recognition using depth information from the Kinect in noisy video conditions

  • Authors:
  • Georgios Galatas; Gerasimos Potamianos; Fillia Makedon

  • Affiliations:
  • University of Texas at Arlington, Arlington, Texas and Institute of Informatics & Telecommunications, Athens, Greece; University of Thessaly, Volos, Greece and Institute of Informatics & Telecommunications, Athens, Greece; University of Texas at Arlington, Arlington, Texas

  • Venue:
  • Proceedings of the 5th International Conference on PErvasive Technologies Related to Assistive Environments
  • Year:
  • 2012

Abstract

In this paper, we build on our recent work, in which we successfully incorporated facial depth data of a speaker, captured by the Microsoft Kinect device, as a third data stream in an audio-visual automatic speech recognizer. In particular, we investigate whether the depth stream provides sufficient speech information to improve system robustness under noisy audio-visual conditions, thus studying system operation beyond the traditional scenario, in which noise is applied to the audio signal alone. For this purpose, we consider four realistic visual-modality degradations at various noise levels, and we conduct small-vocabulary recognition experiments on a suitable, previously collected audio-visual database. Our results demonstrate improved system performance due to the depth modality, as well as a considerable accuracy increase over audio-only speech recognition when both the visual and depth modalities are used.
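
To illustrate the kind of three-stream architecture the abstract describes, below is a minimal sketch of log-linear decision fusion over audio, video, and depth streams. The per-hypothesis log-likelihoods and the stream weights are hypothetical placeholders, not the paper's actual models or configuration; in practice, such weights are tuned on held-out data.

    import numpy as np

    # Hypothetical per-stream log-likelihoods for three competing word
    # hypotheses, e.g. from three single-stream recognizers (audio, video,
    # depth). The values are made up purely for illustration.
    log_lik = {
        "audio": np.array([-120.4, -118.9, -125.1]),
        "video": np.array([-310.2, -305.7, -309.8]),
        "depth": np.array([-298.5, -301.0, -297.3]),
    }

    # Stream exponents (weights). In multi-stream decision fusion these are
    # typically tuned on held-out data and may be lowered for a degraded
    # stream; the values below are arbitrary placeholders.
    weights = {"audio": 0.6, "video": 0.25, "depth": 0.15}

    def fuse(log_lik, weights):
        """Combine stream log-likelihoods with a weighted sum (log-linear fusion)."""
        return sum(weights[s] * log_lik[s] for s in log_lik)

    combined = fuse(log_lik, weights)
    best = int(np.argmax(combined))
    print(f"Fused scores: {combined}, best hypothesis index: {best}")

Lowering the weight of a degraded stream while keeping the remaining streams active is what lets such a recognizer stay robust when, for example, the video channel is noisy but the depth channel is not.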