In recent years there has been tremendous progress in 3-D immersive display and virtual reality (VR) technologies. Scientific visualization of data is one of many applications that have benefited from this progress. To fully exploit the potential of these applications in the new environment, there is a need for "natural" interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of visual hand gesture analysis enhanced with speech recognition to develop a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, a VR visual computing environment for molecular biologists. Free hand gestures are used together with a set of speech commands to manipulate the 3-D graphical display. We concentrate on the visual gesture analysis techniques used in developing this interface. The dual gesture/speech modality is found to greatly aid the interaction capability.
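The bimodal interaction described above can be illustrated with a minimal sketch of temporal gesture/speech fusion: a recognized speech command (e.g. "select") is paired with the hand gesture observed closest in time, yielding a complete display command. All names here (`GestureEvent`, `SpeechEvent`, `fuse`, the 0.5 s window) are hypothetical illustrations, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    kind: str          # e.g. "point", "grasp" (assumed gesture labels)
    timestamp: float   # seconds
    position: tuple    # 3-D coordinates from the vision-based hand tracker

@dataclass
class SpeechEvent:
    command: str       # e.g. "rotate", "select" (assumed speech vocabulary)
    timestamp: float

def fuse(speech, gestures, window=0.5):
    """Pair a speech command with the nearest-in-time gesture seen
    within `window` seconds; return None if no gesture co-occurs."""
    candidates = [g for g in gestures
                  if abs(g.timestamp - speech.timestamp) <= window]
    if not candidates:
        return None
    g = min(candidates, key=lambda g: abs(g.timestamp - speech.timestamp))
    return {"action": speech.command, "target": g.position}

# Example: "select" spoken 0.2 s after a pointing gesture
point = GestureEvent("point", 1.0, (0.1, 0.2, 0.3))
cmd = fuse(SpeechEvent("select", 1.2), [point])
# -> {"action": "select", "target": (0.1, 0.2, 0.3)}
```

Speech supplies the action while the gesture supplies the spatial referent, which is the essential division of labor in such bimodal interfaces.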