Multimodal references in GEORAL Tactile
ReferringPhenomena '97 Referring Phenomena in a Multimedia Context and their Computational Treatment
We are interested in input processing for human-machine multimodal interaction systems for geographical information search. In our study context, the system lets the user combine speech, gesture, and the visual mode. The system displays a map on the screen, and the user queries it about sites (hotels, campsites, etc.) by specifying a place of search. The referenced places are objects in the visual context, such as cities, roads, and rivers. The system must determine the designated object in order to complete the understanding of the user's request. In this context, we aim to improve the reference resolution process while taking ambiguous designations into account. In this paper, we focus on the modeling of the visual context. This modeling takes into account the notion of salience, its role in designation, and its use in the processing methods.
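The core idea above — ranking candidate map objects by visual salience to resolve an otherwise ambiguous designation — can be sketched as follows. This is a minimal illustration, not the paper's actual model: the class and function names, the flat salience scores, and the category-matching step are all hypothetical assumptions for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VisualObject:
    """Hypothetical representation of an object in the visual context."""
    name: str        # e.g. a city or river name shown on the map
    category: str    # "city", "road", "river", ...
    salience: float  # assumed visual salience score in [0, 1]

def resolve_reference(category: str,
                      candidates: List[VisualObject]) -> Optional[VisualObject]:
    """Pick the most salient object matching the spoken category.

    Sketch only: filter the visual context by the category heard in the
    utterance, then let salience decide among the remaining candidates.
    """
    matches = [obj for obj in candidates if obj.category == category]
    if not matches:
        return None
    return max(matches, key=lambda obj: obj.salience)

# A pointing gesture or recent mention might raise an object's salience;
# the spoken category ("the river") then selects among candidates.
scene = [
    VisualObject("A", "city", 0.4),
    VisualObject("B", "river", 0.7),
    VisualObject("C", "river", 0.3),
]
best = resolve_reference("river", scene)
print(best.name)  # -> B (the more salient of the two rivers)
```

Under this sketch, an ambiguous designation ("the river" when two rivers are visible) is resolved in favor of the most salient candidate, which is the role the abstract assigns to salience in the processing methods.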