Automatic Lipreading with Limited Training Data

  • Authors:
  • S. L. Wang; W. H. Lau; S. H. Leung

  • Affiliations:
  • Shanghai Jiaotong University, Shanghai, CHINA; City University of Hong Kong, Kowloon, HONG KONG; Chinese University of Hong Kong, Shatin, HONG KONG

  • Venue:
  • ICPR '06 Proceedings of the 18th International Conference on Pattern Recognition - Volume 03
  • Year:
  • 2006

Abstract

Speech recognition based solely on visual information, such as the lip shape and its movement, is referred to as lipreading. This paper presents an automatic lipreading technique for speaker-dependent (SD) and speaker-independent (SI) speech recognition tasks. Since the visual features are extracted at the frame rate of the video sequence, a spline representation is employed to translate the discrete-time sampled visual features into the continuous domain. The spline coefficients within the same word class are constrained to share a similar expression and can be estimated from the training data by the EM algorithm. In addition, an adaptive multi-model approach is proposed to overcome the variation caused by different speaking styles in the speaker-independent recognition task. Experiments are carried out on recognizing the ten English digits; accuracies of 96% for speaker-dependent recognition and 88% for speaker-independent recognition have been achieved, demonstrating the superiority of our approach over the other classifiers investigated.
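
The spline step the abstract describes can be illustrated with a minimal sketch: a feature sequence sampled at the video frame rate is fitted with a spline so it can be evaluated at arbitrary times. This sketch assumes scipy's B-spline routines and an invented lip-width feature; it does not reproduce the paper's actual spline basis, the per-word-class coefficient constraints, or the EM estimation.

```python
# Sketch of converting frame-rate-sampled visual features into a
# continuous-time representation via spline fitting. The feature values
# and frame rate below are illustrative, not the authors' data.
import numpy as np
from scipy.interpolate import splrep, splev

# Hypothetical lip-width feature sampled at 25 fps over one utterance.
frame_rate = 25.0
feature = np.array([0.30, 0.32, 0.41, 0.55, 0.62, 0.58, 0.47, 0.36, 0.31])
t = np.arange(len(feature)) / frame_rate  # frame timestamps in seconds

# Fit a cubic B-spline; tck holds the knots and coefficients. In the
# paper, such coefficients would be constrained per word class and
# estimated from training data by EM.
tck = splrep(t, feature, s=0)

# Evaluate the continuous representation on a denser time grid, so
# utterances of different lengths can be compared on a common domain.
t_dense = np.linspace(t[0], t[-1], 100)
feature_continuous = splev(t_dense, tck)
print(feature_continuous[:5])
```

Because the spline can be sampled anywhere in its domain, utterances recorded at different speaking rates or frame counts map to curves of a common parameterization, which is what makes the per-class coefficient modeling tractable.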