We present MultiML, a markup language for the annotation of multimodal human utterances. MultiML can represent input from several modalities, as well as the relationships between those modalities. Because MultiML separates the general parts of a representation from more context-specific aspects, it can easily be adapted to a wide range of contexts. This paper demonstrates how speech and gestures are described with MultiML, showing the principles, including hierarchy and underspecification, that ensure its quality and extensibility. As a proof of concept, we show how MultiML is used to annotate a sample human-robot interaction in a multimodal joint-action scenario.
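The abstract contains no concrete annotation, so the following is a minimal sketch of what a hierarchical, MultiML-style annotation of a combined speech-and-gesture utterance might look like, built with Python's standard xml.etree.ElementTree. All element and attribute names (utterance, speech, gesture, relation, referent) and the sample command are hypothetical assumptions for illustration, not the published MultiML schema; the unresolved referent value "?" is meant to mirror the underspecification principle mentioned above.

```python
# Hypothetical sketch of a MultiML-style annotation; element and attribute
# names are illustrative assumptions, not the actual MultiML schema.
import xml.etree.ElementTree as ET

# Top-level node for one multimodal utterance; modalities nest beneath it,
# reflecting the hierarchical structure described in the abstract.
utterance = ET.Element("utterance", id="u1")

# Speech modality: a transcribed command with its time span in seconds.
speech = ET.SubElement(utterance, "speech", id="s1", start="0.00", end="1.40")
ET.SubElement(speech, "transcript").text = "Give me that slat"

# Gesture modality: a pointing gesture that temporally overlaps the speech.
gesture = ET.SubElement(utterance, "gesture", id="g1", type="pointing",
                        start="0.60", end="1.10")

# Cross-modal relation: the deictic "that" in the speech is grounded by the
# gesture; the concrete referent stays underspecified ("?") until a later
# interpretation step resolves it against the scene.
relation = ET.SubElement(utterance, "relation", type="deixis",
                         speech_anchor="s1", gesture_anchor="g1")
ET.SubElement(relation, "referent", value="?")

# Serialize the annotation as XML.
print(ET.tostring(utterance, encoding="unicode"))
```

Nesting the relation element inside the utterance keeps cross-modal links local to the utterance they describe, which is one plausible way to realize the hierarchy and cross-modal relationships the abstract refers to.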