A design space for multimodal systems: concurrent processing and data fusion. In INTERCHI '93: Proceedings of the INTERCHI '93 Conference on Human Factors in Computing Systems.
A generic platform for addressing the multimodal challenge. In CHI '95: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
QuickSet: multimodal interaction for distributed applications. In MULTIMEDIA '97: Proceedings of the Fifth ACM International Conference on Multimedia.
Gandalf: an embodied humanoid capable of real-time multimodal dialogue with people. In AGENTS '97: Proceedings of the First International Conference on Autonomous Agents.
Embodiment in conversational interfaces: Rea. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Providing integrated toolkit-level support for ambiguity in recognition-based interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Embodied contextual agent in information delivering application. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 2.
User Modeling for Personalized City Tours. Artificial Intelligence Review.
The human-computer interaction handbook.
VL '95: Proceedings of the 11th International IEEE Symposium on Visual Languages.
Communicative humanoids: a computational model of psychosocial dialogue skills.
A framework and toolkit for the construction of multimodal learning interfaces.
Information state and dialogue management in the TRINDI dialogue move engine toolkit. Natural Language Engineering.
Unification-based multimodal integration. In ACL '97: Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics.
Mobile MultiModal presentation. In Proceedings of the 12th Annual ACM International Conference on Multimedia.
Model-Based Specification of Virtual Interaction Environments. In VLHCC '04: Proceedings of the 2004 IEEE Symposium on Visual Languages - Human Centric Computing.
ICASSP '96: Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 06.
Multimodal interactive maps: designing for human performance. Human-Computer Interaction.
Multimodal Sentence Similarity in Human-Computer Interaction Systems. In KES '07: Proceedings of the 11th International Conference on Knowledge-Based Intelligent Information and Engineering Systems and the XVII Italian Workshop on Neural Networks.
Multimodal interaction systems combine visual information (images, text, sketches, and so on) with voice, gestures, and other modalities to provide flexible and powerful dialogue styles, letting users choose one or more interaction modalities. Such systems lower the barriers to adopting mobile devices for value-added services, and integrating multiple input modes lets users benefit from the natural way humans communicate. This paper examines the main features of multimodal interaction and multimodal systems, starting from the definition of visual language given in Bottoni et al. (1995) and extending it to multimodality. It defines the notions of modal and multimodal message, interpretation and materialisation functions, and multimodal sentence. It then introduces and formally defines the classes of cooperation between modes, specifying the temporal relationships among the modalities involved and the relationships between the chunks of information each modality carries.
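The idea of chunks of information tied to modalities and related in time can be illustrated with a minimal sketch. The `Chunk` type, the interval-overlap test, and the three cooperation labels (redundancy, complementarity, independence) below are illustrative assumptions, not the paper's formal definitions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    """A chunk of information carried by one modality over a time interval."""
    modality: str   # e.g. "speech", "gesture", "sketch"
    content: str    # the information the chunk conveys
    start: float    # interval start, in seconds
    end: float      # interval end, in seconds

def overlaps(a: Chunk, b: Chunk) -> bool:
    """True when the two chunks' time intervals intersect."""
    return a.start < b.end and b.start < a.end

def cooperation(a: Chunk, b: Chunk) -> str:
    """Classify how two chunks from different modalities cooperate:
    redundancy (same content in both modes), complementarity (different
    content, overlapping in time), or independence (neither)."""
    if a.content == b.content:
        return "redundancy"
    if overlaps(a, b):
        return "complementarity"
    return "independence"

# "Put that there": a deictic gesture overlapping the spoken sentence
# contributes content the speech alone lacks, so the modes complement
# each other.
speech = Chunk("speech", "put that there", 0.0, 1.5)
point = Chunk("gesture", "object#7", 0.4, 0.9)
print(cooperation(speech, point))  # complementarity
```

A fuller model along these lines would replace the boolean overlap test with Allen-style interval relations (before, meets, overlaps, during, ...), so that the temporal relationship itself, not just its presence, distinguishes the cooperation classes.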