The gesture input modality considered in multimodal dialogue systems is usually reduced to pointing or manipulation actions. Under an approach based on the spontaneous character of communication, however, the treatment of such actions involves many processes. Without any constraints, the user may combine gesture with speech and may exploit peculiarities of the visual context, which guide both the articulation of gesture trajectories and the choice of words. The semantic interpretation of multimodal utterances then becomes a complex problem that must take into account the variety of referring expressions, the variety of gestural trajectories, structural parameters of the visual context, and directives from a specific task.

Following this spontaneous approach, we propose to give dialogue systems maximal understanding capabilities, so that the various interaction modes are all taken into account. Considering the development of haptic devices (such as the PHANToM), which extend the sensations available to the user, particularly tactile and kinesthetic ones, we propose to explore a new research direction: the integration of haptic gesture into multimodal dialogue systems, in terms of its possible associations with speech for object reference and manipulation. In this paper we focus on the compatibility between haptic gesture and multimodal reference models, and on the consequences of processing this new modality for intelligent system architectures, an issue that has not yet been sufficiently studied from a semantic point of view.
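To make the reference-resolution problem concrete, the following is a minimal sketch, not taken from the paper itself: all class names, function names, and scoring weights are hypothetical. It illustrates how a fusion component might combine a spoken referring expression with a pointing or haptic gesture event, using category constraints from the words and a simple salience boost from gesture proximity, with haptic contact treated as a stronger cue than distal pointing.

```python
from dataclasses import dataclass

# Hypothetical illustration of salience-based multimodal reference
# resolution: a spoken referring expression is fused with a pointing
# or haptic gesture event to rank candidate objects in the visual context.

@dataclass
class SceneObject:
    name: str
    category: str           # e.g. "lamp", "cube"
    position: tuple         # (x, y) coordinates in the visual context
    salience: float = 0.0   # prior salience, e.g. from dialogue history

@dataclass
class GestureEvent:
    kind: str               # "pointing" or "haptic" (touch/manipulation)
    target_position: tuple  # where the trajectory ends or contact occurs

def distance(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def resolve_reference(expression_category, gesture, scene, radius=1.0):
    """Rank candidate referents for an expression such as 'this lamp'
    accompanied by a gesture.  The words filter by category; proximity
    to the gesture target raises the salience score."""
    candidates = []
    for obj in scene:
        if obj.category != expression_category:
            continue  # the referring expression rules this object out
        score = obj.salience
        d = distance(obj.position, gesture.target_position)
        if d <= radius:
            # Assumed weighting: haptic contact outweighs distal pointing.
            score += 2.0 if gesture.kind == "haptic" else 1.0 / (1.0 + d)
        candidates.append((score, obj))
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return candidates[0][1] if candidates else None

# Usage: "this lamp" uttered while touching the region around (2, 2).
scene = [SceneObject("lamp-1", "lamp", (0.0, 0.0)),
         SceneObject("lamp-2", "lamp", (2.0, 2.1)),
         SceneObject("cube-1", "cube", (2.0, 2.0))]
gesture = GestureEvent(kind="haptic", target_position=(2.0, 2.0))
print(resolve_reference("lamp", gesture, scene).name)  # -> lamp-2
```

In a real system the scoring would have to weigh the structural parameters of the visual context and the task directives discussed above, and a haptic manipulation event would carry richer information (contact force, grasped object identity) than the single contact point assumed here.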