MDS: a multimodal-based dialog system

  • Authors:
  • Jiyong Ma, Wen Gao, Xilin Chen, Shiguang Shan, Wei Zeng, Jie Yan, Hongming Zhang, Jiang Wu, Feng Wu, Chunli Wang

  • Affiliations:
  • Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, China
  • Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, China and Department of Computer Science, Harbin Institute of Technology, Harbin, China
  • Department of Computer Science, Harbin Institute of Technology, Harbin, China
  • Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, China
  • Department of Computer Science, Harbin Institute of Technology, Harbin, China
  • Microsoft Research China
  • Department of Computer Science, Harbin Institute of Technology, Harbin, China
  • Microsoft Research China
  • -
  • Department of Computer Science, Dalian University of Technology, Dalian, China

  • Venue:
  • MULTIMEDIA '00: Proceedings of the Eighth ACM International Conference on Multimedia
  • Year:
  • 2000


Abstract

This paper describes MDS, a multimodal-based dialog system that supports communication between the hearing impaired and the hearing. The system converts sign language to speech, and combines that speech with gesture and lip motion on a human face. The facial features are derived by 3D feature extraction from the speaker's face, so that the "virtual face" resembles the actual speaker. The main technologies in the system are sign language recognition, sign language synthesis, and synchronization of lip movement with speech. Integrating sign language recognition, sign language synthesis, speech recognition, speech synthesis, and 3D virtual-human technologies provides a new way for the hearing impaired to interact with computers.
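The abstract outlines a three-stage sign-to-speech pipeline (sign language recognition, speech synthesis, lip/speech synchrony on a virtual face). The sketch below is only an illustration of that flow, not the authors' implementation; every class, method, and value in it (SignRecognizer, SpeechSynthesizer, FaceAnimator, the placeholder outputs) is hypothetical.

```python
# Illustrative sketch of the sign-to-speech pipeline described in the
# abstract. All names and return values are hypothetical placeholders;
# the paper does not specify components at this level of detail.

from typing import List, Tuple


class SignRecognizer:
    """Sign language recognition: gesture input frames -> text."""
    def recognize(self, sign_frames: List[object]) -> str:
        return "hello"  # placeholder transcription


class SpeechSynthesizer:
    """Speech synthesis: text -> waveform samples plus phoneme timings."""
    def synthesize(self, text: str) -> Tuple[List[float], List[Tuple[str, float]]]:
        audio = [0.0] * 16000  # placeholder: one second of silence at 16 kHz
        timings = [("HH", 0.0), ("EH", 0.1), ("L", 0.2), ("OW", 0.3)]
        return audio, timings


class FaceAnimator:
    """Drives the 3D "virtual face" (built from features extracted from
    the real speaker) so lip motion stays synchronized with the speech."""
    def animate(self, timings: List[Tuple[str, float]]) -> None:
        for phoneme, start in timings:
            pass  # map each phoneme to a mouth shape at time `start`


def sign_to_speech(frames, recognizer, tts, face):
    text = recognizer.recognize(frames)    # 1. sign language recognition
    audio, timings = tts.synthesize(text)  # 2. speech synthesis
    face.animate(timings)                  # 3. lip/speech synchrony
    return audio


if __name__ == "__main__":
    audio = sign_to_speech([], SignRecognizer(), SpeechSynthesizer(), FaceAnimator())
    print(f"synthesized {len(audio)} audio samples")
```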