Collaborative effort towards common ground in situated human-robot dialogue
Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction
To enable effective referential grounding in situated human-robot dialogue, we conducted an empirical study of how conversation partners collaborate to mediate a shared perceptual basis when their visual perceptual capabilities are mismatched. In particular, we developed a graph-based representation that captures both linguistic discourse and visual discourse, and applied inexact graph matching to ground referring expressions. Our empirical results show that even when computer vision algorithms produce many errors (e.g., 84.7% of the objects in the environment are misrecognized), our approach still achieves 66% accuracy in referential grounding. These results demonstrate that, owing to its error-tolerant nature, inexact graph matching offers a promising way to mediate a shared perceptual basis for referential grounding in situated interaction.
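The core idea of the abstract, matching a graph built from the speaker's language against a (possibly erroneous) graph built from computer vision, can be illustrated with a minimal brute-force sketch. This is not the authors' implementation: the node attributes, the relation labels, and the additive similarity score below are all hypothetical, chosen only to show how inexact matching tolerates misrecognized objects.

```python
from itertools import permutations

def node_sim(ling, vis):
    """Fraction of shared attributes that agree between a linguistic node
    (what the speaker said) and a vision node (what the robot perceived).
    Partial matches still score, so perception errors are tolerated."""
    keys = set(ling) & set(vis)
    if not keys:
        return 0.0
    return sum(ling[k] == vis[k] for k in keys) / len(keys)

def edge_sim(ling_rel, vis_rel):
    """1.0 if the spatial relation stated in language is also perceived."""
    return 1.0 if ling_rel == vis_rel else 0.0

def ground(ling_nodes, ling_edges, vis_nodes, vis_edges):
    """Inexact matching by exhaustive search: try every assignment of
    linguistic referents to perceived objects and keep the assignment
    with the highest total node + edge similarity."""
    best_score, best_map = -1.0, None
    vis_ids = list(vis_nodes)
    for perm in permutations(vis_ids, len(ling_nodes)):
        mapping = dict(zip(ling_nodes, perm))
        score = sum(node_sim(ling_nodes[l], vis_nodes[v])
                    for l, v in mapping.items())
        score += sum(edge_sim(rel, vis_edges.get((mapping[a], mapping[b])))
                     for (a, b), rel in ling_edges.items())
        if score > best_score:
            best_score, best_map = score, mapping
    return best_map, best_score

# "the red cup to the left of the book" -> two referents and one relation
ling_nodes = {"r1": {"color": "red", "type": "cup"},
              "r2": {"type": "book"}}
ling_edges = {("r1", "r2"): "left_of"}

# Perceived scene: o2 is misrecognized as a box, yet grounding succeeds
# because the rest of the graph still matches.
vis_nodes = {"o1": {"color": "red", "type": "cup"},
             "o2": {"type": "box"},
             "o3": {"type": "book"}}
vis_edges = {("o1", "o3"): "left_of"}

mapping, score = ground(ling_nodes, ling_edges, vis_nodes, vis_edges)
print(mapping)  # -> {'r1': 'o1', 'r2': 'o3'}
```

Exhaustive search is exponential in the number of referents; the paper's point is the error tolerance of the matching criterion, and a practical system would use a heuristic inexact-matching algorithm instead.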