Humans employ different information channels (modalities), such as speech, pictures, and gestures, in their communication. Some of these modalities are believed to be more error-prone for certain types of data, so combining modalities can help reduce ambiguity in the interaction. There have been numerous efforts to implement multimodal interfaces for computers and robots, yet there is no general, standard framework for developing them. In this paper we propose a general framework for implementing multimodal interfaces. It performs natural language understanding, multimodal integration, and semantic analysis in an incremental pipeline, and it includes a multimodal grammar language used for multimodal presentation and for generating semantic meaning.
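To make the idea of an incremental multimodal pipeline concrete, the following is a minimal sketch, not the framework described in the paper: it fuses spoken word increments with pointing-gesture events as they arrive, binding deictic words to the temporally closest gesture and filling a simple semantic frame. All names (IncrementalFuser, SpeechToken, GestureEvent, Frame) and the fixed time-window binding rule are illustrative assumptions standing in for the paper's grammar-based integration.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SpeechToken:
    word: str
    time: float          # seconds since start of utterance

@dataclass
class GestureEvent:
    target: str          # object id resolved from a pointing gesture
    time: float

@dataclass
class Frame:
    action: Optional[str] = None
    obj: Optional[str] = None

class IncrementalFuser:
    """Consumes speech and gesture increments as they arrive and fills a
    semantic frame; deictic words ("this", "that") are bound to the
    temporally closest pointing gesture (assumed binding strategy)."""

    DEICTICS = {"this", "that", "here", "there"}
    VERBS = {"move", "delete", "rotate"}
    WINDOW = 1.5  # max speech/gesture time distance in seconds (assumed)

    def __init__(self) -> None:
        self.frame = Frame()
        self.pending_gestures: List[GestureEvent] = []
        self.pending_deictic: Optional[SpeechToken] = None

    def on_gesture(self, g: GestureEvent) -> Frame:
        # Gesture increments may arrive before or after the deictic word.
        self.pending_gestures.append(g)
        self._try_bind()
        return self.frame

    def on_speech(self, t: SpeechToken) -> Frame:
        if t.word in self.VERBS:
            self.frame.action = t.word
        elif t.word in self.DEICTICS:
            self.pending_deictic = t
            self._try_bind()
        return self.frame

    def _try_bind(self) -> None:
        # Bind the pending deictic word to the closest gesture in time.
        if self.pending_deictic is None or not self.pending_gestures:
            return
        g = min(self.pending_gestures,
                key=lambda ev: abs(ev.time - self.pending_deictic.time))
        if abs(g.time - self.pending_deictic.time) <= self.WINDOW:
            self.frame.obj = g.target
            self.pending_gestures.remove(g)
            self.pending_deictic = None

if __name__ == "__main__":
    fuser = IncrementalFuser()
    fuser.on_speech(SpeechToken("move", 0.2))
    fuser.on_speech(SpeechToken("this", 0.6))       # "move this ..."
    fuser.on_gesture(GestureEvent("box_3", 0.7))    # pointing at box_3
    print(fuser.on_speech(SpeechToken("left", 1.1)))
    # -> Frame(action='move', obj='box_3')

In a full framework of the kind the abstract describes, the hard-coded word lists and time window above would be replaced by the multimodal grammar language, which declaratively specifies how modality increments combine and what semantic representation they yield.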