Integrating simultaneous input from speech, gaze, and hand gestures
Intelligent multimedia interfaces
A generic platform for addressing the multimodal challenge
CHI '95 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Artificial Intelligence Review - Special issue on integration of natural language and vision processing: recent advances
The human-computer interaction handbook
Unification-based multimodal integration
ACL '97 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
Towards integrated microplanning of language and iconic gesture for multimodal output
Proceedings of the 6th international conference on Multimodal interfaces
A user interface framework for multimodal VR interactions
ICMI '05 Proceedings of the 7th international conference on Multimodal interfaces
Understanding Coverbal Iconic Gestures in Shape Descriptions
Semantic Information and Local Constraints for Parametric Parts in Interactive Virtual Construction
SG '07 Proceedings of the 8th international symposium on Smart Graphics
Processing Iconic Gestures in a Multimodal Virtual Construction Environment
Gesture-Based Human-Computer Interaction and Simulation
This paper presents a model for the unified semantic representation of shape conveyed by speech and coverbal 3-D gestures. The representation is tailored to capture the semantic contributions of both modalities during free object descriptions. It is shown how the semantic content of shape-related adjectives, nouns, and iconic gestures can be modeled and combined when they co-occur in multimodal utterances such as "a longish bar" accompanied by an iconic gesture. The model has been applied in the development of a prototype system for gesture recognition and integration with speech.