A Gaze and Speech Multimodal Interface

  • Authors:
  • Qiaohui Zhang; Atsumi Imamiya; Kentaro Go; Xiaoyang Mao

  • Venue:
  • ICDCSW '04 Proceedings of the 24th International Conference on Distributed Computing Systems Workshops - W7: EC (ICDCSW'04) - Volume 7
  • Year:
  • 2004

Abstract

Eyesight and speech are two channels that humans naturally use to communicate with each other. However, both the eye-tracking and speech-recognition techniques available today are still far from perfect. Our goal is to find how to effectively make use of the error-prone information from both modes, in order to use one mode to correct the errors of the other, overcome the immaturity of recognition techniques, resolve the ambiguity of the user's speech, and improve interaction speed. The integration strategies and the evaluation experiment demonstrate that these two modalities can be used multimodally to improve the usability and efficiency of the user interface in ways that would not be available to speech-only or gaze-only systems.
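
The abstract does not spell out the integration strategy, so the following is a minimal, hypothetical Python sketch of the general idea it describes: scoring speech-recognition hypotheses jointly against the object nearest the gaze point, so that each error-prone channel constrains the other. All names here (ScreenObject, SpeechHypothesis, resolve_command) and the Gaussian gaze weighting are illustrative assumptions, not the paper's actual method.

```python
import math
from dataclasses import dataclass

# Hypothetical fusion sketch: the paper's real integration algorithm is not
# given in the abstract, so the data model and scoring below are assumptions.

@dataclass
class ScreenObject:
    name: str          # spoken label, e.g. "red folder"
    x: float           # center position in pixels
    y: float

@dataclass
class SpeechHypothesis:
    text: str          # recognized command text
    confidence: float  # recognizer score in [0, 1]

def gaze_weight(obj: ScreenObject, gaze_x: float, gaze_y: float,
                sigma: float = 80.0) -> float:
    """Weight an object by its distance to the (noisy) gaze point.

    A Gaussian falloff tolerates eye-tracker error: objects near the
    fixation still get substantial weight even if the gaze misses them.
    """
    d2 = (obj.x - gaze_x) ** 2 + (obj.y - gaze_y) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

def resolve_command(hypotheses: list[SpeechHypothesis],
                    objects: list[ScreenObject],
                    gaze_x: float, gaze_y: float):
    """Pick the (hypothesis, object) pair with the best joint score.

    A low-ranked speech hypothesis can win if it names the object the
    user is looking at, and gaze ambiguity between nearby objects is
    resolved by what the user actually said.
    """
    best, best_score = None, -1.0
    for hyp in hypotheses:
        for obj in objects:
            if obj.name in hyp.text:  # crude lexical match (assumption)
                score = hyp.confidence * gaze_weight(obj, gaze_x, gaze_y)
                if score > best_score:
                    best, best_score = (hyp, obj), score
    return best

if __name__ == "__main__":
    objects = [ScreenObject("red folder", 100, 120),
               ScreenObject("blue folder", 400, 300)]
    # The recognizer's top guess names the wrong object; gaze corrects it.
    hypotheses = [SpeechHypothesis("open blue folder", 0.6),
                  SpeechHypothesis("open red folder", 0.5)]
    result = resolve_command(hypotheses, objects, gaze_x=110, gaze_y=130)
    if result:
        hyp, obj = result
        print(f"Chose '{hyp.text}' targeting {obj.name}")
```

In the demo, the recognizer's top hypothesis names the wrong object, but the gaze weight promotes the second hypothesis; the reverse correction, where speech disambiguates between two objects close to the gaze point, falls out of the same joint score.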