An evaluation of an augmented reality multimodal interface using speech and paddle gestures

  • Authors:
  • Sylvia Irawati; Scott Green; Mark Billinghurst; Andreas Duenser; Heedong Ko

  • Affiliations:
  • Imaging Media Research Center, Korea Institute of Science and Technology (Irawati, Ko); Human Interface Technology Laboratory New Zealand, University of Canterbury (Green, Billinghurst, Duenser)

  • Venue:
  • ICAT '06: Proceedings of the 16th International Conference on Advances in Artificial Reality and Tele-Existence
  • Year:
  • 2006

Abstract

This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and multimodal fusion strategies, which are based on a combination of time-based and domain semantics. We then present results from a user study comparing multimodal input with gesture-only input. The results show that a combination of speech and paddle gestures improves the efficiency of user interaction. Finally, we describe some design recommendations for developing other multimodal AR interfaces.
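
The abstract only names the fusion strategy; the paper's actual architecture is not reproduced here. As an illustration of what time-based fusion with a domain-semantics check can look like, below is a minimal Python sketch. All names (`SpeechEvent`, `PaddleEvent`, `fuse`), the event fields, and the 1.5-second pairing window are assumptions for this example, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical input events; field names are illustrative only.
@dataclass
class SpeechEvent:
    command: str       # recognized utterance, e.g. "move the couch here"
    timestamp: float   # arrival time in seconds

@dataclass
class PaddleEvent:
    target_object: Optional[str]         # object the paddle points at, if any
    position: Tuple[float, float, float] # paddle tip in world coordinates
    timestamp: float

FUSION_WINDOW = 1.5  # assumed time window (seconds) for pairing inputs

def fuse(speech: SpeechEvent, paddle: PaddleEvent):
    """Pair a speech command with a paddle gesture if the two inputs
    co-occur in time and are semantically compatible."""
    # Time-based check: inputs must fall within the fusion window.
    if abs(speech.timestamp - paddle.timestamp) > FUSION_WINDOW:
        return None

    # Domain-semantics check: a deictic command like "move ... here"
    # is incomplete on its own; the paddle gesture supplies the location.
    if "move" in speech.command and "here" in speech.command:
        return ("move", speech.command, paddle.position)
    return None
```

In a sketch like this, neither modality alone resolves the command: speech carries the action and object reference, while the gesture grounds the spatial deixis, which is the kind of complementarity the paper's user study evaluates.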