In this paper we describe a Wizard of Oz (WOz) user study of an Augmented Reality (AR) interface that uses multimodal input (MMI) combining natural hand interaction and speech commands. Our goal is to use a WOz study to guide the creation of a multimodal AR interface that is as natural as possible for the user. In the study, participants performed three virtual object-arranging tasks under two display conditions (a head-mounted display and a desktop monitor), allowing us to observe how they issued multimodal commands and how the AR display condition affected those commands. The results provide valuable insights into how people naturally interact in a multimodal AR scene assembly task. For example, we identified the optimal time frame for fusing speech and gesture commands into a single command. We also found that display type did not produce a significant difference in the types of commands used. Based on these results, we present design recommendations for multimodal interaction in AR environments.
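The point about an optimal time frame for fusing speech and gesture suggests a simple implementation pattern: buffer an event from one modality and pair it with a complementary event that arrives within a fixed window. The sketch below (Python) illustrates that pattern only; the FUSION_WINDOW_S value, the Event fields, and the MultimodalFuser class are hypothetical and not taken from the paper.

```python
import time
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical fusion window. The study reports an optimal time frame for
# combining speech and gesture; the value below is a placeholder, not the
# paper's measured figure.
FUSION_WINDOW_S = 1.0

@dataclass
class Event:
    modality: str     # "speech" or "gesture"
    payload: str      # e.g. recognized phrase or id of the pointed-at object
    timestamp: float  # seconds

class MultimodalFuser:
    """Pairs a speech event with a gesture event (or vice versa) that
    arrives within `window` seconds, yielding one fused command."""

    def __init__(self, window: float = FUSION_WINDOW_S):
        self.window = window
        self.pending: Optional[Event] = None

    def feed(self, event: Event) -> Optional[Tuple[str, str]]:
        if self.pending is None:
            self.pending = event
            return None
        if event.timestamp - self.pending.timestamp > self.window:
            # Pending event expired; start a new window with this event.
            self.pending = event
            return None
        if self.pending.modality != event.modality:
            # Complementary modalities inside the window: fuse them.
            fused = (self.pending.payload, event.payload)
            self.pending = None
            return fused
        # Same modality twice: keep the newer event.
        self.pending = event
        return None

# Example: "move that" (speech) followed 0.4 s later by pointing at a chair.
fuser = MultimodalFuser()
t = time.time()
fuser.feed(Event("speech", "move that", t))
print(fuser.feed(Event("gesture", "chair_01", t + 0.4)))
# -> ('move that', 'chair_01')
```

In a real system the window would be tuned to the empirically observed gap between speech and gesture onsets, which is exactly the kind of measurement the WOz study was designed to produce.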