We present a novel modular approach to integrating multiple input/output (I/O) modes in a virtual environment that imitates natural, intuitive, and effective human interaction behavior. The I/O modes used in this research are spatial tracking of both hands, finger gesture recognition, head/body spatial tracking, voice recognition (discrete recognition for simple commands and continuous recognition for natural-language input), immersive stereo display, and synthesized speech output. Intuitive, natural interaction is achieved in several stages: identify all the tasks that need to be performed, group similar tasks, and assign each group to a particular mode so that the interaction imitates the physical world. This modular approach allows additional input and output modes, as well as additional users, to be included or removed easily. We describe this multimodal interaction paradigm by applying it to a real-world application: visualizing, modeling, and fitting protein molecular structures in an immersive virtual environment.
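The task-grouping and mode-assignment stages described above can be sketched as a small mode registry. This is a minimal illustration, not the paper's implementation; all class, method, and task names here are assumptions chosen for clarity.

```python
class Mode:
    """A hypothetical input or output modality, e.g. hand tracking or voice."""

    def __init__(self, name):
        self.name = name
        self.tasks = set()  # the group of similar tasks assigned to this mode

    def handles(self, task):
        return task in self.tasks


class MultimodalManager:
    """Registry that lets modes (and their task groups) be added or removed."""

    def __init__(self):
        self.modes = {}

    def add_mode(self, mode):
        # Plug in a new I/O mode without touching existing ones.
        self.modes[mode.name] = mode

    def remove_mode(self, name):
        # Unplug a mode; its tasks simply become unhandled.
        self.modes.pop(name, None)

    def assign(self, mode_name, *tasks):
        # Group similar tasks and bind the group to one mode.
        self.modes[mode_name].tasks.update(tasks)

    def dispatch(self, task):
        # Route a task to the first mode that handles it, or None.
        for mode in self.modes.values():
            if mode.handles(task):
                return mode.name
        return None


# Illustrative usage with assumed task names:
mgr = MultimodalManager()
mgr.add_mode(Mode("hand_tracking"))
mgr.add_mode(Mode("voice"))
mgr.assign("hand_tracking", "grab_molecule", "rotate_molecule")
mgr.assign("voice", "load_structure", "save_fit")

print(mgr.dispatch("grab_molecule"))   # -> hand_tracking
mgr.remove_mode("voice")
print(mgr.dispatch("load_structure"))  # -> None (mode was removed)
```

The point of the sketch is the modularity claim: adding or removing a mode is a single registry operation, so the rest of the system is unaffected.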