The application of extension neuro-network on computer-assisted lip-reading recognition for hearing impaired

  • Authors:
  • Yun-Long Lay;Chung-Ho Tsai;Hui-Jen Yang;Chern-Sheng Lin;Chuan-Zhao Lai

  • Affiliations:
  • Department of Electronic Engineering, National Chinyi University of Technology, No. 35, Lane 215, Sec. 1, Chung-San Road, Taipin City, Taichung Hsien 411, Taiwan;Department of Electronic Engineering, National Chinyi University of Technology, No. 35, Lane 215, Sec. 1, Chung-San Road, Taipin City, Taichung Hsien 411, Taiwan;Department of Information Management, National Chinyi University of Technology, No. 35, Lane 215, Sec. 1, Chung-San Road, Taipin City, Taichung Hsien 411, Taiwan;Department of Automatic Control Engineering, Feng Chia University, No. 100, Wenhwa Road, Seatwen, Taichung 40724, Taiwan;Institute of Information and Electrical Energy, National Chinyi University of Technology, No. 35, Lane 215, Sec. 1, Chung-San Road, Taipin City, Taichung Hsien 411, Taiwan

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2008


Abstract

In human communication, a speaker's facial expressions and lip movements carry extremely rich linguistic information. The hearing impaired, besides relying on residual hearing to communicate with other people, can also use lip reading as a communication tool. When the hearing impaired learn lip reading with a computer-assisted system, they can do so freely, without the constraints of time, place, or situation. Therefore, we propose a computer-assisted lip-reading system (CALRS) that recognizes the correct lip shape for phonetic pronunciation using image processing, an object-oriented language, and a neural network. The system can accurately match lip images of Mandarin phonetic pronunciations using a self-organizing map neural network (SOMNN) and extension theory, helping the hearing impaired correct their pronunciation.
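
To illustrate the kind of recognition step the abstract describes, the sketch below shows how a self-organizing map could cluster lip-shape feature vectors and assign a new lip image to its best-matching node. This is a minimal illustration, not the authors' implementation: the 8-dimensional feature vectors, grid size, and training schedule are assumptions, and the paper's image-processing front end and extension-theory matching stage are not shown.

```python
# Minimal SOM sketch for grouping lip-shape feature vectors.
# All parameters (grid size, feature dimension, schedules) are illustrative assumptions.
import numpy as np

def train_som(features, grid_h=4, grid_w=4, epochs=100, lr0=0.5, sigma0=2.0):
    """Train a small SOM on lip-shape feature vectors (n_samples x n_dims)."""
    rng = np.random.default_rng(0)
    n_dims = features.shape[1]
    weights = rng.random((grid_h, grid_w, n_dims))      # codebook vectors
    grid_y, grid_x = np.mgrid[0:grid_h, 0:grid_w]       # node coordinates on the map

    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5   # decaying neighbourhood radius
        for x in features:
            # Best-matching unit: node whose weight vector is closest to the input.
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Pull the BMU and its map neighbours toward the input vector.
            grid_dist2 = (grid_y - by) ** 2 + (grid_x - bx) ** 2
            influence = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

def classify(weights, x):
    """Return the (row, col) of the SOM node best matching feature vector x."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

if __name__ == "__main__":
    # Hypothetical 8-dimensional lip-shape features (e.g. mouth width/height ratios).
    demo = np.random.default_rng(1).random((60, 8))
    som = train_som(demo)
    print("best-matching node:", classify(som, demo[0]))
```

In a system like the one described, each map node would correspond to a cluster of lip shapes for particular Mandarin phonetic pronunciations, and a learner's lip image would be compared against these clusters before the correctness of the pronunciation is judged.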