An Improved Weight Decision Rule Using SNNR and Fuzzy Value for Multi-modal HCI

  • Authors:
  • Jung-Hyun Kim; Kwang-Seok Hong

  • Affiliations:
  • School of Information and Communication Engineering, Sungkyunkwan University, 300, Chunchun-dong, Jangan-gu, Suwon, KyungKi-do, 440-746, Korea (both authors)

  • Venue:
  • WILF '07 Proceedings of the 7th international workshop on Fuzzy Logic and Applications: Applications of Fuzzy Sets Theory
  • Year:
  • 2007

Abstract

In this paper, we propose an improved weight decision rule based on the SNNR (Signal Plus Noise to Noise Ratio) and a fuzzy value for simultaneous multi-modal input, including synchronization between the audio and gesture modalities. To validate the proposed weight decision rule, we implement a wireless PDA-based Multi-Modal Fusion Architecture (hereinafter, MMFA) that couples an embedded speech recognizer with a KSSL (Korean Standard Sign Language) recognizer, fuses and recognizes 130 word-based instruction models expressed in speech and KSSL, and then translates the recognition result into synthetic speech (TTS) and a visual illustration in real time. In the experiments, the average recognition rate of the MMFA, which fuses the two sensory channels on a wireless PDA, was 96.54% in a clean environment (e.g., office space) and 93.21% in a noisy environment over the 130 word-based instruction models.
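
The abstract does not spell out the weight decision rule itself, so the following minimal Python sketch is only a plausible illustration of how an SNNR-driven fuzzy weight could arbitrate between speech and sign-language recognition scores at fusion time. The SNNR computation, the piecewise-linear membership function, and the 5-25 dB thresholds are assumptions for illustration, not the rule defined in the paper.

    # Hypothetical sketch: SNNR-based fuzzy weighting for audio-gesture
    # score-level fusion. Function names and thresholds are assumptions,
    # not the weight decision rule from the paper.
    import numpy as np

    def snnr_db(signal_plus_noise: np.ndarray, noise: np.ndarray) -> float:
        """SNNR = 10 * log10(power(signal + noise) / power(noise)), in dB."""
        p_sn = np.mean(signal_plus_noise ** 2)
        p_n = np.mean(noise ** 2) + 1e-12  # guard against division by zero
        return 10.0 * np.log10(p_sn / p_n)

    def speech_weight(snnr: float, low: float = 5.0, high: float = 25.0) -> float:
        """Piecewise-linear fuzzy membership: trust speech more as SNNR rises."""
        if snnr <= low:
            return 0.0
        if snnr >= high:
            return 1.0
        return (snnr - low) / (high - low)

    def fuse(speech_score: float, gesture_score: float, snnr: float) -> float:
        """Weighted sum of the per-modality recognition scores."""
        w = speech_weight(snnr)
        return w * speech_score + (1.0 - w) * gesture_score

    # Example: at 22 dB SNNR the fused score leans toward the speech channel.
    # fuse(speech_score=0.9, gesture_score=0.6, snnr=22.0)  -> about 0.86
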