Intent capturing through multimodal inputs

  • Authors:
  • Weimin Guo, Cheng Cheng, Mingkai Cheng, Yonghan Jiang, Honglin Tang

  • Affiliations:
  • School of Computer Science, Beijing Institute of Technology, Beijing, PRC; Beijing Laboratory of Intelligent Information Technology, Beijing Institute of Technology, Beijing, PRC (all authors)

  • Venue:
  • HCI'13: Proceedings of the 15th International Conference on Human-Computer Interaction: Interaction Modalities and Techniques - Volume Part IV
  • Year:
  • 2013

Abstract


Virtual manufacturing environments require complex and accurate 3D human-computer interaction. A main problem of current virtual environments (VEs) is the heavy cognitive and motor load they place on users. This paper investigates multimodal intent delivery and intent inference in virtual environments. An eye-gaze modality is added to a virtual assembly system, and typical intents expressed through the combined dual-hand and eye-gaze modalities are designed. The reliability and accuracy of the eye-gaze modality are examined through experiments, which show that eye-gaze and hand multimodal cooperation has great potential to enhance the naturalness and efficiency of human-computer interaction (HCI).
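
To make the fusion idea concrete, below is a minimal sketch of how gaze and hand modalities might be combined to infer an assembly intent: gaze fixation disambiguates *which* virtual part the user means, while the hand gesture disambiguates *what* action to perform on it. All names here (GazeSample, HandGesture, infer_intent, the dwell threshold) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical gaze-plus-hand intent fusion sketch; not the paper's actual code.
from dataclasses import dataclass

@dataclass
class GazeSample:
    target_id: str   # ID of the virtual part the gaze ray currently hits
    dwell_ms: float  # how long the gaze has rested on that part

@dataclass
class HandGesture:
    name: str        # e.g. "grasp", "point", "release", "idle"

# Assumed fixation time before gaze is treated as deliberate (illustrative value).
DWELL_THRESHOLD_MS = 300.0

def infer_intent(gaze: GazeSample, gesture: HandGesture) -> str | None:
    """Fuse the two modalities: gaze selects the object, the hand
    gesture selects the operation to perform on it."""
    if gaze.dwell_ms < DWELL_THRESHOLD_MS:
        return None  # gaze not yet a deliberate fixation
    if gesture.name == "grasp":
        return f"pick({gaze.target_id})"
    if gesture.name == "point":
        return f"select({gaze.target_id})"
    if gesture.name == "release":
        return f"place({gaze.target_id})"
    return None  # no recognized intent for this gesture

# Example: a 350 ms fixation on 'bolt_07' plus a grasp gesture
print(infer_intent(GazeSample("bolt_07", 350.0), HandGesture("grasp")))
# -> pick(bolt_07)
```

The design choice worth noting is that neither modality alone triggers an action: a fixation without a gesture is ambiguous, and a gesture without a fixation has no target, which is one way such systems can reduce the cognitive and motor load the abstract describes.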