Towards mediating shared perceptual basis in situated dialogue
SIGDIAL '12 Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue
In situated human-robot dialogue, humans and robots are co-present in a shared environment, but their capabilities for perceiving that environment differ significantly, so their representations of the shared world are misaligned. For humans and robots to communicate successfully through language, they must mediate these differences and establish common ground. To address this issue, this paper describes a dialogue system that aims to mediate a shared perceptual basis during human-robot dialogue. In particular, we present an empirical study that examines the roles of the robot's collaborative effort and of the performance of its natural language processing modules in dialogue grounding. Our empirical results indicate that in situated human-robot dialogue, low collaborative effort from the robot may lead its human partner to believe that common ground has been established; however, such beliefs may not reflect true mutual understanding. To support truly grounded dialogue, the robot should make the extra effort of making its partner aware of its internal representation of the shared world.