This paper presents a multimodal interaction framework for semantic 3D object manipulation in virtual reality. In our framework, interaction devices such as a keyboard, mouse, joystick, or tracker can be combined with speech utterances to issue commands to the system. We define an object ontology, based on common-sense knowledge, that describes the relationships between virtual objects. Taking into account the current user context and the object ontology, the semantic integration component fuses the interpretation results from the input manager and sends the outcome to the interaction manager, where it is mapped to an appropriate object manipulation. The system can thus understand the user's intention and assist in achieving the intended goal during manipulation, rather than relying entirely on the user's control of the interaction device and the object, thereby avoiding nonsensical manipulations.
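The core idea of the semantic integration step can be illustrated with a small sketch: a fused command (speech utterance plus device-selected objects) is executed only if the object ontology permits the requested relationship. All names here (`Ontology`, `integrate`, the placement rules) are hypothetical illustrations, not the paper's actual API.

```python
# Hypothetical sketch of ontology-guided semantic integration:
# a manipulation is accepted only when the object ontology allows it.
from dataclasses import dataclass, field


@dataclass
class Ontology:
    # Maps an object type to the set of object types it may be placed on,
    # standing in for the common-sense relationships the paper describes.
    can_be_placed_on: dict = field(default_factory=dict)

    def allows(self, obj_type: str, target_type: str) -> bool:
        return target_type in self.can_be_placed_on.get(obj_type, set())


def integrate(ontology, speech_command, pointed_object, target_object):
    """Fuse the speech utterance with device-selected objects and reject
    nonsensical manipulations before they reach the interaction manager."""
    if speech_command == "put" and ontology.allows(
        pointed_object["type"], target_object["type"]
    ):
        return {"action": "place",
                "object": pointed_object["id"],
                "on": target_object["id"]}
    return None  # nonsensical manipulation: silently ignored in this sketch


ontology = Ontology(can_be_placed_on={"cup": {"table"}, "lamp": {"table", "desk"}})
# "Put that there": the cup may go on the table, but not on another cup.
cmd = integrate(ontology, "put",
                {"id": "cup1", "type": "cup"},
                {"id": "table1", "type": "table"})
```

In this toy version the ontology is a static lookup table; the framework described above would additionally weigh the current user context when disambiguating the command.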