Lip reading of hearing impaired persons using HMM

  • Authors:
  • N. Puviarasan; S. Palanivel

  • Affiliations:
  • Department of Computer Science and Engineering, Annamalai University, Annamalainagar 608 002, India (both authors)

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2011


Abstract

This paper describes a method for lip reading of hearing-impaired persons. The term lip reading refers to recognizing spoken words from visual speech information such as lip movements. The visual speech video of the hearing-impaired person is given as input to a face detection module that locates the face region, and the mouth region is then determined relative to the face region. The mouth images are used for feature extraction. Features are extracted using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), and each feature set is applied separately as input to a hidden Markov model (HMM) for recognizing the visual speech. To understand the visual speech of hearing-impaired persons at cash collection counters, 33 words are chosen. For each word, 20 samples are collected for training the HMM and another five samples are used for testing it. The experimental results show that the method achieves a recognition performance of 91.0% with DCT-based lip features and 97.0% with DWT-based lip features.
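The pipeline the abstract describes (low-frequency DCT coefficients of the mouth image as frame features, then per-word HMM likelihood scoring) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block size, the number of retained coefficients, and the discrete-observation HMM with hand-set parameters are all assumptions made for the example.

```python
import numpy as np

def dct2(img):
    """Orthonormal 2-D DCT-II, computed as two separable 1-D transforms."""
    def basis(n):
        k = np.arange(n)[:, None]            # frequency index
        i = np.arange(n)[None, :]            # sample index
        C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        C[0] /= np.sqrt(2.0)                 # DC-row normalisation
        return C
    h, w = img.shape
    return basis(h) @ img @ basis(w).T

def lip_features(mouth_img, n=4):
    """Keep the top-left n x n low-frequency DCT coefficients of a
    grayscale mouth image as the feature vector for one video frame.
    (n=4 is an illustrative choice, not taken from the paper.)"""
    return dct2(mouth_img.astype(float))[:n, :n].ravel()

def forward_log_likelihood(obs_seq, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm in log space for numerical stability."""
    alpha = log_pi + log_B[:, obs_seq[0]]
    for o in obs_seq[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)
```

Recognition then amounts to training one HMM per word (33 here) and labelling a test sequence with the word whose model gives the highest forward log-likelihood; in practice the continuous DCT features would be vector-quantized or modelled with Gaussian emissions rather than the discrete emissions shown above.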