Using multimodal interaction to navigate in arbitrary virtual VRML worlds
Proceedings of the 2001 workshop on Perceptive user interfaces
In this work we present a generic architecture for interfacing various input devices with VRML browsers. Focusing on navigation, our system supports the full range of potential input devices: from conventional desktop devices such as keyboard and mouse, through dedicated virtual-reality devices such as the spacemouse and joystick, to, as a special feature, semantically higher-level input such as speech and gesture recognition. Communication between the individual components of the system is based on a context-free grammar, which allows abstract modeling of the various devices and handles both discrete and continuous navigation information. Two new node extensions support the VRML author in creating highly customizable 3D applications: the DeviceSensor node grabs arbitrary user input in a systematic way, and the Camera node gives full control over the scene view by specifying velocity vectors, thereby enabling arbitrary navigation modes. Finally, a prototypical implementation in VRML serves as proof of concept.
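To make the two proposed extensions concrete, the following VRML97 fragment sketches how a DeviceSensor and a Camera node might be wired together. The field names (`device`, `axes_changed`, `linearVelocity`) and the event format are assumptions inferred from the abstract, not the paper's actual interface; only the general pattern (raw device events routed through a Script that sets camera velocities) is taken from the description above.

```
#VRML V2.0 utf8
# Illustrative sketch only: the DeviceSensor and Camera fields below are
# assumed names, not the interface defined in the paper.

DEF input DeviceSensor {            # grabs arbitrary user input from one device
  device  "joystick"                # assumed field: which device to listen to
  enabled TRUE
}

DEF mapper Script {                 # maps raw device events to camera velocities
  eventIn  SFVec3f set_axes
  eventOut SFVec3f linearVelocity_changed
  url "javascript:
    function set_axes(v) {
      // scale joystick axes to a walking speed (assumed event format)
      linearVelocity_changed = new SFVec3f(v.x * 2.0, 0.0, v.z * 2.0);
    }"
}

DEF view Camera {                   # viewpoint driven by velocity vectors
  position       0 1.6 10
  linearVelocity 0 0 0              # assumed field: integrated each frame
}

ROUTE input.axes_changed            TO mapper.set_axes
ROUTE mapper.linearVelocity_changed TO view.set_linearVelocity
```

Because the Camera node is driven purely by velocity vectors, exchanging the Script's mapping function is enough to realize a different navigation mode (e.g. fly instead of walk) without touching the device layer.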