WPS and voice-xml-based multi-modal fusion agent using SNNR and fuzzy value

  • Authors:
  • Jung-Hyun Kim; Kwang-Seok Hong

  • Affiliations:
  • School of Information and Communication Engineering, Sungkyunkwan University, Suwon, KyungKi-do, Korea (both authors)

  • Venue:
  • NBiS '07: Proceedings of the 1st International Conference on Network-Based Information Systems
  • Year:
  • 2007

Abstract

Traditional methods for fusing multiple sensing modalities fall into three categories: 1) data-level fusion, 2) feature-level fusion, and 3) decision-level fusion. This paper proposes a novel decision-level fusion and fission framework and implements a WPS (Wearable Personal Station) and Voice-XML-based Multi-Modal Fusion Agent (hereinafter, MMFA) using audio and gesture modalities. Because the MMFA assigns a different weight and a feedback function to each recognizer according to the SNNR (Signal-Plus-Noise-to-Noise Ratio) and a fuzzy value, it can select the optimal instruction-processing interface for a given situation or noisy environment, allowing more interactive communication even under noise. In addition, the MMFA delivers a wider range of personalized information more effectively; because it uses a WPS together with a distributed-computing-based database and SQL logic for synchronization and fusion between modalities, it avoids the complicated mathematical algorithms and computation costs associated with multidimensional features and pattern (data) size.
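The weighting scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the SNNR-to-weight mapping (a logistic curve), the threshold of 15 dB, and the function names `snnr_db` and `fuse_decisions` are all assumptions introduced here for clarity.

```python
import math

def snnr_db(signal_plus_noise_power, noise_power):
    """SNNR = 10*log10((S+N)/N), the Signal-Plus-Noise-to-Noise Ratio in dB."""
    return 10.0 * math.log10(signal_plus_noise_power / noise_power)

def fuse_decisions(candidates, snnr, fuzzy_value, snnr_threshold_db=15.0):
    """
    Decision-level fusion sketch: weight each recognizer's confidence by
    the acoustic SNNR (favoring the speech recognizer on a clean channel)
    and by a fuzzy membership value for the gesture recognizer, then pick
    the modality with the highest weighted score.

    candidates maps modality -> (label, confidence), e.g.
    {"speech": ("open_menu", 0.9), "gesture": ("open_menu", 0.8)}.
    The logistic mapping below (0.5 weight at the threshold) is assumed.
    """
    speech_w = 1.0 / (1.0 + math.exp(-(snnr - snnr_threshold_db) / 3.0))
    weights = {"speech": speech_w, "gesture": (1.0 - speech_w) * fuzzy_value}
    best = max(candidates, key=lambda m: candidates[m][1] * weights.get(m, 0.0))
    label, score = candidates[best]
    return best, label, score * weights.get(best, 0.0)
```

With a low SNNR (noisy audio) the gesture modality dominates; with a high SNNR the speech recognizer is selected, matching the abstract's claim that the agent picks the optimal instruction-processing interface for the noise conditions.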