The use of temporal speech and lip information for multi-modal speaker identification via multi-stream HMMs

  • Authors:
  • T. Wark; S. Sridharan; V. Chandran

  • Affiliations:
  • Speech Research Laboratory, Queensland University of Technology, Brisbane, Qld., Australia

  • Venue:
  • ICASSP '00: Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 04
  • Year:
  • 2000

Abstract

We investigate the use of temporal lip information, in conjunction with speech information, for robust, text-dependent speaker identification. We propose that significant speaker-dependent information can be obtained from moving lips, enabling speaker recognition systems to remain highly robust in the presence of noise. The fusion structure for the audio and visual information is based on multi-stream hidden Markov models (MSHMMs), with audio and visual features forming two independent data streams. Recent work has applied multi-modal MSHMMs successfully to speech recognition. Temporal lip information has been used for speaker identification before (T.J. Wark et al., 1998); however, that work was restricted to output fusion via single-stream HMMs. We extend this previous work and show that the MSHMM is a valid structure for multi-modal speaker identification.
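
As a rough illustration (not the authors' code), the MSHMM scoring described in the abstract can be read as Viterbi decoding in which each state's emission log-likelihood is a weighted sum over the two streams: log b_j(o_t) = w_a * log b_j_audio(o_t_audio) + w_v * log b_j_video(o_t_video). The Python sketch below assumes frame-synchronous features, diagonal-Gaussian emissions, and illustrative stream weights w_audio and w_video; none of these specifics are taken from the paper.

import numpy as np

def gaussian_loglik(x, mean, var):
    # Log density of a diagonal-covariance Gaussian, summed over feature dims.
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def mshmm_viterbi_score(audio, video, model, w_audio=0.7, w_video=0.3):
    # audio: (T, Da) acoustic features; video: (T, Dv) lip features,
    # frame-synchronous with the audio. 'model' holds log initial
    # probabilities 'log_pi' (N,), log transitions 'log_A' (N, N), and
    # per-stream Gaussian parameters mu_a/var_a (N, Da), mu_v/var_v (N, Dv).
    T = audio.shape[0]
    # Fused emission score per frame and state: weighted sum of the two
    # independent stream log-likelihoods (the multi-stream combination rule).
    log_b = (w_audio * gaussian_loglik(audio[:, None, :], model["mu_a"], model["var_a"])
             + w_video * gaussian_loglik(video[:, None, :], model["mu_v"], model["var_v"]))
    delta = model["log_pi"] + log_b[0]          # Viterbi initialisation
    for t in range(1, T):
        delta = np.max(delta[:, None] + model["log_A"], axis=0) + log_b[t]
    return np.max(delta)                        # best-path log-likelihood

# Identification: score the utterance against each enrolled speaker's MSHMM
# and pick the highest-scoring model, e.g.
#   best = max(models, key=lambda spk: mshmm_viterbi_score(a, v, models[spk]))

The stream weights make the robustness claim concrete: shifting weight toward the visual stream reduces reliance on the acoustic stream when the audio is noisy, which is the behaviour the abstract attributes to the lip information.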