In language-based interaction between a human and an artificial agent (e.g., a robot) in the physical world, referential grounding is difficult because the human and the agent perceive the shared environment with different knowledge and capabilities. To facilitate such interaction, the agent should continuously learn about the environment through interaction with humans and incorporate the acquired knowledge when grounding references in human utterances. To address this issue, this paper presents a graph-based approach to referential grounding and examines how referential grounding and word acquisition influence each other in physical-world interaction. Our empirical results show that, for most words, automated word acquisition through interaction improves referential grounding performance. The exception is words describing object types, for which human supervision remains important. Nevertheless, better referential grounding enables more accurate acquisition of word meanings, which in turn further improves grounding of references in subsequent utterances.
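The graph-based grounding idea can be illustrated with a minimal toy sketch: entities mentioned in an utterance, together with their attribute words, form one graph, while the objects the agent perceives form another, and grounding selects the one-to-one node assignment with the highest compatibility under learned word models. Everything below (the `word_models` table, the score floor, the exhaustive search over assignments) is an illustrative assumption for exposition, not the paper's actual model, which would also score relational edges between nodes.

```python
from itertools import permutations

# Hypothetical learned word models mapping an attribute word to
# P(word | perceived feature value); values here are made up.
word_models = {
    "red":   {"color": {"red": 0.9, "blue": 0.1}},
    "blue":  {"color": {"red": 0.1, "blue": 0.9}},
    "large": {"size":  {"large": 0.8, "small": 0.2}},
}

def node_score(words, obj):
    """Compatibility of one mentioned entity with one perceived object."""
    score = 1.0
    for w in words:
        for feat, table in word_models.get(w, {}).items():
            # Small floor for feature values the model has not seen.
            score *= table.get(obj.get(feat), 0.05)
    return score

def ground(mentions, objects):
    """Pick the one-to-one assignment maximizing the product of node scores.

    Exhaustive search is fine for toy inputs; a real system would use an
    approximate graph-matching algorithm instead.
    """
    best, best_score = None, -1.0
    for assignment in permutations(objects, len(mentions)):
        s = 1.0
        for words, obj in zip(mentions, assignment):
            s *= node_score(words, obj)
        if s > best_score:
            best, best_score = assignment, s
    return [obj["id"] for obj in best]

mentions = [["red", "large"], ["blue"]]          # "the large red one", "the blue one"
objects = [
    {"id": "o1", "color": "blue", "size": "small"},
    {"id": "o2", "color": "red",  "size": "large"},
]
print(ground(mentions, objects))  # -> ['o2', 'o1']
```

Under this sketch, updating `word_models` from interaction (word acquisition) directly changes which assignment scores highest, which is the feedback loop between acquisition and grounding that the abstract describes.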