The pervasive computing paradigm provides the user with a uniform computing space available everywhere, at any time, and in the most appropriate form and modality. These requirements create the need for user interfaces that are usable, multimodal, and personalized for each user. This paper discusses multimodality: in particular, it analyzes the features and computational issues of multimodal interaction in order to examine methodological aspects of defining multimodal interaction languages for pervasive applications. Multimodality is addressed at the grammar level rather than at the dialogue-management level: the different unimodal inputs are treated as a single multimodal input that is sent to the dialogue parser, which interprets it using the grammar specification, instead of each input being interpreted separately and the results then combined. The main objective of the paper is thus to explore multimodal interaction for pervasive applications through the use of a single multimodal language rather than through the integration of several unimodal languages.
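To make the grammar-level fusion idea concrete, the following is a minimal sketch, not the paper's actual system: all names (Token, merge_streams, parse_command) and the toy "put that there"-style production are hypothetical. It illustrates the architectural distinction the abstract draws, in which tokens from several unimodal streams are merged into one multimodal input sequence that a single grammar-driven parser interprets, rather than each modality being parsed separately and the results fused afterwards.

```python
from dataclasses import dataclass

# Hypothetical common token form: every unimodal event is reduced to a
# terminal symbol so that one grammar can consume all modalities.
@dataclass
class Token:
    modality: str   # e.g. "speech" or "gesture"
    symbol: str     # terminal symbol seen by the grammar
    value: object   # payload (word, screen coordinate, ...)
    time: float     # timestamp used to order the merged stream

def merge_streams(*streams):
    """Grammar-level fusion: interleave unimodal token streams by time
    so the parser receives a single multimodal input sequence."""
    return sorted((t for s in streams for t in s), key=lambda t: t.time)

# Toy multimodal production, in the spirit of "put that there":
#   COMMAND -> "put" OBJECT_REF LOCATION_REF
# where the referents may be filled by speech words or deictic gestures.
OBJECT_REF = {"that", "point"}
LOCATION_REF = {"there", "point"}

def parse_command(tokens):
    """Minimal recognizer for the single COMMAND production above."""
    syms = [t.symbol for t in tokens]
    if (len(syms) == 3 and syms[0] == "put"
            and syms[1] in OBJECT_REF and syms[2] in LOCATION_REF):
        return {"action": "put",
                "object": tokens[1].value,
                "location": tokens[2].value}
    return None  # the merged sequence does not match the grammar

# Usage: speech carries the verb, gestures fill both referents.
speech = [Token("speech", "put", "put", 0.10)]
gesture = [Token("gesture", "point", (120, 45), 0.35),
           Token("gesture", "point", (300, 210), 0.80)]
print(parse_command(merge_streams(speech, gesture)))
# -> {'action': 'put', 'object': (120, 45), 'location': (300, 210)}
```

The design point of the sketch is that `parse_command` never knows, or cares, which modality produced a terminal; the multimodal grammar alone decides whether speech, gesture, or a mix can satisfy each slot, which is the contrast with dialogue-level fusion, where per-modality interpreters run first and a separate component reconciles their outputs.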