A design space for multimodal systems: concurrent processing and data fusion. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems (CHI '93).
Towards a tool for predicting speech functionality. Speech Communication.
Multimodal error correction for speech user interfaces. ACM Transactions on Computer-Human Interaction (TOCHI).
An interaction constraints model for mobile and wearable computer-aided engineering systems in industrial applications
Error recovery in a blended style eye gaze and speech interface. In Proceedings of the 5th International Conference on Multimodal Interfaces.
Drawing pictures with natural language and direct manipulation. In Proceedings of the 15th Conference on Computational Linguistics (COLING '94), Volume 2.
Multimodal output specification / simulation platform. In Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI '05).
A framework for the combination and characterization of output modalities. In Proceedings of the 7th International Conference on Design, Specification, and Verification of Interactive Systems (DSV-IS '00).
A pattern-based methodology for multimodal interaction design. In Proceedings of the 9th International Conference on Text, Speech and Dialogue (TSD '06).
This paper outlines a research plan for combining model-based methodology with multimodal interaction design. It takes up frameworks such as modality theory, TYCOON, and CARE and relates them to approaches for modelling the context of use, such as the interaction constraints model and the unifying reference framework for multi-target user interfaces. The research aims to produce methodological design support for multimodal interaction, consisting of a design pattern language for multimodal interaction and a set of model-based notational elements.