Combining Global and Local Classifiers for Lipreading

  • Authors:
  • Shengping Zhang, Hongxun Yao, Yuqi Wan, Dan Wang

  • Affiliations:
  • School of Computer Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China (all authors)

  • Venue:
  • ACII '07 Proceedings of the 2nd international conference on Affective Computing and Intelligent Interaction
  • Year:
  • 2007


Abstract

Lipreading has become an active research topic in recent years, since visual information extracted from lip movements has been shown to improve the performance of automatic speech recognition (ASR) systems, especially in noisy environments [1]-[3], [5]. There are two important issues in lipreading: 1) how to extract the most effective features from lip image sequences, and 2) how to build lipreading models. This paper mainly focuses on how to choose more effective features for lipreading.
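The title suggests score-level combination of a global classifier (trained on whole-lip features) with local classifiers (trained on sub-region features). As a purely illustrative sketch, not the paper's actual method, one common way to combine two classifiers is a weighted sum of their class posteriors; the mixing weight `alpha` below is a hypothetical parameter:

```python
import numpy as np

def fuse_scores(global_probs, local_probs, alpha=0.6):
    """Weighted score-level fusion of two classifiers' class posteriors.
    alpha is a hypothetical mixing weight, not taken from the paper."""
    fused = alpha * np.asarray(global_probs) + (1 - alpha) * np.asarray(local_probs)
    return int(np.argmax(fused))

# Toy example with 3 viseme classes: the global classifier prefers class 0,
# the local classifier prefers class 1; fusion resolves the disagreement.
g = [0.5, 0.3, 0.2]
l = [0.2, 0.7, 0.1]
print(fuse_scores(g, l))  # 0.6*0.3 + 0.4*0.7 = 0.46 is the largest fused score -> 1
```

In practice the weight would be tuned on validation data; other fusion rules (product, max, or a trained meta-classifier) are equally plausible under this sketch.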