Speech and gesture recognition-based robust language processing interface in noise environment

  • Authors:
  • Jung-Hyun Kim; Kwang-Seok Hong

  • Affiliations:
  • School of Information and Communication Engineering, Sungkyunkwan University, Suwon, KyungKi-do, Korea (both authors)

  • Venue:
  • IDEAL'06: Proceedings of the 7th International Conference on Intelligent Data Engineering and Automated Learning
  • Year:
  • 2006


Abstract

We propose and implement a Wearable Personal Station (WPS) and Web-based robust Language Processing Interface (LPI) that integrates speech and sign language (the Korean Standard Sign Language; KSSL) recognition. In other words, the LPI is an integrated language recognition and processing system that selects the most suitable recognizer according to the noise level of the given environment. It extends the traditional uni-modal language recognition system, which relies on a single sensory channel, a desktop PC, and a wired network, into an embedded, ubiquitous-oriented next-generation language processing system. In our experiments on 52 sentential recognition models, the uni-modal recognizers achieved average recognition rates of 92.58% (KSSL only) and 93.28% (speech only), while the proposed LPI achieved an average recognition rate of 95.09%. The average recognition time of the LPI was 0.3 seconds.
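The core idea of the LPI, choosing between the speech recognizer and the KSSL sign-language recognizer according to the measured noise level, can be sketched as follows. This is a minimal illustration only: the SNR-based criterion, the threshold value, and the function names are assumptions, since the abstract does not specify how noise degree is measured or compared.

```python
# Hypothetical sketch of noise-adaptive modality selection as described in
# the abstract: route input to the speech recognizer in quiet conditions
# and fall back to the sign-language (KSSL) recognizer when noisy.
# The 15 dB threshold is an illustrative assumption, not from the paper.

def select_recognizer(snr_db: float, threshold_db: float = 15.0) -> str:
    """Return which recognizer to use for the current input frame.

    snr_db: estimated signal-to-noise ratio of the audio channel (dB).
    threshold_db: below this SNR, speech is considered unreliable.
    """
    return "speech" if snr_db >= threshold_db else "kssl"


# Example: a quiet office (~25 dB SNR) uses speech; a loud street (~5 dB)
# switches to sign-language recognition.
print(select_recognizer(25.0))  # speech
print(select_recognizer(5.0))   # kssl
```

In a full system, the SNR estimate would be refreshed continuously so the interface can switch modalities as the acoustic environment changes.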