Developing multimodal interfaces is not only a matter of technology: it also requires tailoring the interface to the user's communication needs. In command-and-control applications the user usually holds the initiative, so gestures and speech (the user's communication channels) must be studied carefully to support a sensible interaction style. In this chapter, we introduce the notion of a semantic frame for integrating gestures and speech in multimodal interfaces. We describe the main elements of a model developed to integrate the use of both channels, and illustrate the model with two fully implemented systems. We also present possible extensions of the model that could improve the supported interaction style as the underlying technologies mature.
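To make the idea concrete, here is a minimal, hypothetical sketch of such a frame in the "put that there" style: speech fixes the action and leaves slots open, and deictic gestures fill them. All names and fields here are illustrative assumptions, not the chapter's actual model.

```python
from dataclasses import dataclass


@dataclass
class SemanticFrame:
    """Illustrative frame: an action plus slots that gestures may fill."""
    action: str
    slots: dict  # slot name -> value (None while unfilled)

    def fill_from_gesture(self, slot: str, value: str) -> None:
        # A pointing gesture resolves one open slot of the spoken command.
        if slot in self.slots and self.slots[slot] is None:
            self.slots[slot] = value

    def complete(self) -> bool:
        # The frame can be executed once every slot has been filled.
        return all(v is not None for v in self.slots.values())


# Speech: "put that there" -> action known, referents left to gestures.
frame = SemanticFrame("move", {"object": None, "target": None})
frame.fill_from_gesture("object", "blue_square")  # first pointing gesture
frame.fill_from_gesture("target", "(120, 45)")    # second pointing gesture
print(frame.complete())
```

The point of the sketch is the division of labour: the speech channel supplies the command structure, while the gesture channel supplies the referents that complete it.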