Audio-video automatic speech recognition: an example of improved performance through multimodal sensor input

  • Authors:
  • Roland Goecke

  • Affiliations:
  • Autonomous Systems and Sensing Technologies, National ICT Australia, Canberra, Australia and Australian National University, RSISE, Canberra, Australia

  • Venue:
  • MMUI '05 Proceedings of the 2005 NICTA-HCSNet Multimodal User Interaction Workshop - Volume 57
  • Year:
  • 2006

Abstract

One of the advantages of multimodal HCI technology is the performance improvement that can be gained over conventional single-modality technology by employing complementary sensors in different modalities. Such information is particularly useful in practical, real-world applications where performance must be robust against all kinds of noise. An example is the domain of automatic speech recognition (ASR). Traditionally, ASR systems use information only from the audio modality, and in the presence of acoustic noise their performance drops quickly. However, it has been shown that incorporating additional visual speech information from the video modality improves performance significantly, so that AV ASR systems can be employed in application areas where audio-only ASR systems would fail, thus opening new application areas for ASR technology. In this paper, a non-intrusive (no artificial markers), real-time 3D lip tracking system is presented, as well as its application to AV ASR. The multivariate statistical method 'co-inertia analysis' is also presented, which offers improved numerical stability over other multivariate analyses even for small sample sizes.
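The abstract does not spell out the co-inertia computation, but its core idea is standard: given two feature tables measured on the same samples (e.g. audio features and lip-shape features per frame), find paired axes that maximize the squared covariance between the projected tables, via an SVD of the cross-covariance matrix. A minimal sketch under that assumption (the matrix shapes and function name are illustrative, not from the paper):

```python
import numpy as np

def coinertia(X, Y, k=2):
    """Co-inertia analysis sketch: paired axes of two tables
    X (n samples x p features) and Y (n samples x q features)
    that maximize squared covariance between projections."""
    # Center each table column-wise
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Cross-covariance matrix between the two modalities (p x q)
    C = Xc.T @ Yc / (len(X) - 1)
    # SVD of the cross-covariance gives the paired co-inertia axes;
    # singular values measure the covariance captured by each pair
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k], Vt[:k].T, s[:k]

# Hypothetical usage: 50 frames, 6 audio features, 4 lip features
rng = np.random.default_rng(0)
audio = rng.standard_normal((50, 6))
lips = audio[:, :4] + 0.1 * rng.standard_normal((50, 4))
Ua, Ul, s = coinertia(audio, lips)
```

Because it relies only on the (small, p x q) cross-covariance matrix rather than inverting within-table covariance matrices, this formulation stays numerically stable even when the number of samples is small relative to the feature dimensions, which matches the stability claim in the abstract.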