An improved fusion and fission architecture between multi-modalities based on wearable computing

  • Authors:
  • Jung-Hyun Kim; Kwang-Seok Hong

  • Affiliations:
  • School of Information and Communication Engineering, Sungkyunkwan University, Chunchun-dong, Jangan-gu, Suwon, KyungKi-do, Korea (both authors)

  • Venue:
  • UIC'07: Proceedings of the 4th International Conference on Ubiquitous Intelligence and Computing
  • Year:
  • 2007

Abstract

This paper introduces an improved fission rule that depends on the SNNR (Signal Plus Noise to Noise Ratio) and a fuzzy value for simultaneous multimodality, and proposes a Fusion User Interface (hereinafter, FUI) that synchronizes audio and gesture modalities, based on an embedded KSSL (Korean Standard Sign Language) recognizer running on the WPS (Wearable Personal Station for the next-generation PC) and VoiceXML. Our approach fuses and recognizes 65 sentential and 157 word-level language models expressed in speech and KSSL, then translates the recognition result into synthetic speech and a visual illustration (graphical display on an HMD, Head-Mounted Display) in real time.
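
The abstract does not spell out the fission rule itself, so the following is only a minimal sketch of one plausible reading: SNNR is taken as the usual 10*log10 ratio of (signal + noise) power to noise power, and a fuzzy membership over SNNR weights the two output modalities (synthetic speech versus the graphical HMD display). The function names, thresholds, and the piecewise-linear membership below are hypothetical illustrations, not taken from the paper.

    import numpy as np

    def snnr_db(signal_plus_noise: np.ndarray, noise: np.ndarray) -> float:
        # SNNR in dB: power of the (signal + noise) frame over noise power.
        p_sn = float(np.mean(signal_plus_noise.astype(float) ** 2))
        p_n = float(np.mean(noise.astype(float) ** 2))
        return 10.0 * np.log10(p_sn / p_n)

    def fission_weights(snnr: float, low_db: float = 5.0,
                        high_db: float = 20.0) -> dict:
        # Hypothetical piecewise-linear fuzzy membership over SNNR:
        # quiet environment (high SNNR)  -> favor synthetic speech output,
        # noisy environment (low SNNR)   -> favor the graphical HMD display.
        t = min(max((snnr - low_db) / (high_db - low_db), 0.0), 1.0)
        return {"synthetic_speech": t, "hmd_display": 1.0 - t}

    # Example: a noisy frame shifts output toward the visual channel.
    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 0.5, 16000)   # noise-only reference frame
    frame = rng.normal(0.0, 0.6, 16000)   # speech-plus-noise frame
    print(fission_weights(snnr_db(frame, noise)))

Under this assumption the fission step degrades gracefully: as the acoustic environment gets noisier, the same recognition result is routed increasingly to the visual channel rather than to speech synthesis.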