A hybrid visual feature extraction method for audio-visual speech recognition

  • Authors:
  • Guanyong Wu; Jie Zhu; Haihua Xu

  • Affiliations:
  • Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China (all authors)

  • Venue:
  • ICIP '09: Proceedings of the 16th IEEE International Conference on Image Processing
  • Year:
  • 2009

Abstract

In this paper, a hybrid visual feature extraction method that combines an extended locally linear embedding (LLE) with visemic linear discriminant analysis (LDA) is presented for audio-visual speech recognition (AVSR). First, the extended LLE is introduced to reduce the dimensionality of the mouth images: it constrains the neighborhood search for each mouth image to the corresponding speaker's dataset rather than the whole dataset, and then maps the high-dimensional mouth image matrices into a low-dimensional Euclidean space. Second, the resulting feature vectors are projected onto the visemic linear discriminant space to obtain an optimal classification space. Finally, in the audio-visual fusion stage, minimum classification error (MCE) training based on segmental generalized probabilistic descent (GPD) is applied to optimize the audio and visual stream weights. Experimental results on the CUAVE database show that the proposed method significantly outperforms classical PCA- and LDA-based methods in visual-only speech recognition. Further experiments demonstrate the robustness of the MCE-based discriminative training method in noisy environments.
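
The pipeline described above (speaker-constrained LLE, then a visemic LDA projection, then stream-weighted fusion) can be illustrated with a short sketch. The Python code below is a minimal reconstruction under stated assumptions, not the authors' implementation: `X` (one flattened mouth-ROI vector per frame), `speaker_ids`, and `viseme_labels` are hypothetical inputs, the hyperparameters are placeholders, and each speaker is assumed to contribute more frames than `n_neighbors`.

```python
import numpy as np
from scipy.linalg import eigh, solve
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def extended_lle(X, speaker_ids, n_neighbors=8, n_components=20, reg=1e-3):
    """LLE with the neighborhood search restricted to frames of the same
    speaker, approximating the paper's 'extended LLE' constraint."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # Candidate neighbors: frames of the same speaker, excluding frame i.
        cand = np.where(speaker_ids == speaker_ids[i])[0]
        cand = cand[cand != i]
        dist = np.linalg.norm(X[cand] - X[i], axis=1)
        nbrs = cand[np.argsort(dist)[:n_neighbors]]
        # Reconstruction weights: solve the regularized local Gram system.
        Z = X[nbrs] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(len(nbrs))
        w = solve(G, np.ones(len(nbrs)), assume_a='pos')
        W[i, nbrs] = w / w.sum()
    # Embedding: bottom eigenvectors of (I - W)^T (I - W), skipping the
    # trivial constant eigenvector.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = eigh(M)
    return vecs[:, 1:n_components + 1]

def fused_log_likelihood(ll_audio, ll_visual, lam):
    # Stream-weighted decision fusion; in the paper the weight lam is
    # optimized by MCE training with segmental GPD, not fixed by hand.
    return lam * ll_audio + (1.0 - lam) * ll_visual

# Usage sketch (inputs are illustrative):
# Y = extended_lle(X, speaker_ids)
# lda = LinearDiscriminantAnalysis()
# V = lda.fit_transform(Y, viseme_labels)   # visemic LDA features
```

Projecting the LLE features with LDA trained on per-frame viseme labels follows the abstract's second step; the MCE/GPD update rule for the stream weight is omitted here for brevity.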