Recent approaches to giving users a more natural way of interacting with virtual environment applications have shown that combining more than one mode of input can be both beneficial and intuitive as a communication medium between humans and computers. Hand gestures and speech are two of the most logical modalities, since users are typically immersed in a virtual world with limited access to traditional input devices such as the keyboard or mouse. In this paper, we describe an ongoing research project to develop multimodal interfaces that combine 3D hand gestures and speech in virtual environments.