Speech and gesture multimodal control of a whole Earth 3D visualization environment
VISSYM '02 Proceedings of the symposium on Data Visualisation 2002
Novel speech and/or gesture interfaces are candidates for use in future mobile and ubiquitous applications. This paper describes an evaluation of several interfaces for visual navigation of a whole Earth 3D terrain model: a mouse-driven interface, a speech interface, a gesture interface, and a multimodal speech-and-gesture interface, each used to navigate to targets placed at various points on the Earth. The study measured each participant's recall of target identity, order, and location as a measure of cognitive load; timing data and a variety of subjective measures, including discomfort and user preference, were also collected. While the familiar and mature mouse interface scored best on most measures, the speech interface also performed well. The gesture and multimodal interfaces suffered from weaknesses in the gesture modality. Weaknesses in the speech and multimodal interfaces are identified and areas for improvement are discussed.