Computational models of dialog context have often focused on unimodal spoken dialog or text, using the language itself as the primary locus of contextual information. But as we move from spoken interaction to situated multimodal interaction on mobile platforms that combine spoken dialog with graphical interaction, touch-screen input, geolocation, and other non-linguistic contextual factors, we will need more sophisticated models of context that capture the influence of these factors on semantic interpretation and dialog flow. Here we focus on how users establish the location they deem salient from the multimodal context by grounding it through interactions with a map-based query system. Many existing systems rely on geolocation alone to establish the location context of a query; we hypothesize that this approach often ignores the grounding actions users make, and we provide an analysis of log data from one such system revealing the errors that arise from this faulty treatment of grounding. We then explore and evaluate, using live field data from a deployed multimodal search system, several context classification techniques that attempt to learn which location contexts users have made salient by grounding them through their multimodal actions.
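To make the classification problem concrete, here is a minimal illustrative sketch, not the paper's actual classifier: a rule-based function that decides which location context to ground a query in, using simple features of the user's recent multimodal actions. All feature names and the threshold below are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of salient-location-context classification.
# Feature names and the 30-second recency threshold are illustrative
# assumptions, not values from the deployed system described above.

def salient_location_context(seconds_since_user_map_gesture,
                             query_names_a_place,
                             map_view_matches_device_location):
    """Return which context the query's location should be grounded in."""
    if query_names_a_place:
        # An explicit place name in the query overrides other context.
        return "spoken-location"
    if (seconds_since_user_map_gesture is not None
            and seconds_since_user_map_gesture < 30):
        # A recent pan/zoom/touch grounds the current map view as salient.
        return "map-view"
    if map_view_matches_device_location:
        # Map still centered on the user: geolocation is a safe context.
        return "geolocation"
    # Otherwise prefer what the user is looking at over where they are.
    return "map-view"
```

A trained model would replace these hand-written rules with weights learned from logged interactions, but the input/output contract, multimodal action features in, a salient context label out, stays the same.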