In recent years, owing to advances in computer vision research, free-hand gestures have been explored as a means of human-computer interaction (HCI). Together with improved speech-processing technology, this is an important step toward natural multimodal HCI. However, the inclusion of non-predefined continuous gestures in a multimodal framework remains a challenging problem. In this paper, we propose a structured approach for studying patterns of multimodal language in the context of 2D display control. We consider systematic analysis of gestures, from observable kinematic primitives to their semantics, as pertinent to a linguistic structure. The proposed semantic classification of co-verbal gestures distinguishes six categories based on their spatio-temporal deixis. We discuss the evolution of a computational framework for gesture and speech integration, which was used to develop an interactive testbed (iMAP). The testbed enabled elicitation of adequate, non-sequential, multimodal patterns in a narrative mode of HCI. The user studies conducted illustrate the significance of accounting for the temporal alignment of gesture and speech parts in semantic mapping. Furthermore, co-occurrence analysis of gesture/speech production suggests a syntactic organization of gestures at the lexical level.
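The co-occurrence analysis mentioned above rests on comparing time intervals of gesture strokes against time intervals of spoken words. As a minimal illustrative sketch (not the paper's actual pipeline — the interval values and labels below are hypothetical), temporally co-occurring words for a given gesture stroke can be found by computing interval overlap:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    label: str
    start: float  # seconds
    end: float    # seconds

def overlap(a: Interval, b: Interval) -> float:
    """Length of temporal overlap between two intervals, in seconds (0 if disjoint)."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))

def co_occurring_words(gesture: Interval, words: list, min_overlap: float = 0.0) -> list:
    """Return words whose spoken interval overlaps the gesture stroke."""
    return [w.label for w in words if overlap(gesture, w) > min_overlap]

# Hypothetical timings for an utterance like "move this here"
gesture = Interval("pointing stroke", 0.8, 1.6)
words = [Interval("move", 0.0, 0.4),
         Interval("this", 0.5, 0.9),
         Interval("here", 1.2, 1.7)]

print(co_occurring_words(gesture, words))  # ['this', 'here']
```

In practice, the alignment matters for semantic mapping: the deictic stroke overlaps "this" and "here" rather than the verb, so the gesture is paired with the spatial referents of the utterance.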