Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with an increasing amount of recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully-elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context; however, other sources of contextual information are not normally used. Also, the generated references normally consist only of language and, possibly, deictic pointing gestures. When referring to objects in the context of a task-based interaction involving jointly manipulating objects, a much richer notion of context is available, which permits a wider range of referring options. In particular, when conversational partners cooperate on a mutual task in a shared environment, objects can be made accessible simply by manipulating them as part of the task. We demonstrate that such expressions are common in a corpus of human-human dialogues based on constructing virtual objects, and then describe how this type of reference can be incorporated into the output of a humanoid robot that engages in similar joint construction dialogues with a human partner.
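The strategy described above — preferring a minimal expression when the object is being manipulated as part of the task, a pointing gesture when it is merely visible, and a fully elaborated description otherwise — can be sketched as a simple decision rule. This is an illustrative sketch only, not the system's actual implementation; the `Obj` class, its fields, and the `choose_reference` function are hypothetical names invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    # Hypothetical object representation for the sketch.
    name: str           # head noun, e.g. "cube"
    color: str          # one distinguishing property
    being_handled: bool # robot is manipulating it as part of the task
    visible: bool       # present in the shared visual context

def choose_reference(obj: Obj) -> tuple[str, str]:
    """Pick a (modality, spoken expression) pair for referring to obj."""
    if obj.being_handled:
        # Manipulating the object already makes it accessible,
        # so a minimal linguistic expression suffices.
        return ("manipulate", f"this {obj.name}")
    if obj.visible:
        # Deictic pointing gesture plus a reduced description.
        return ("point", f"that {obj.color} {obj.name}")
    # No shared context: fall back to a fully elaborated description.
    return ("speech-only", f"the {obj.color} {obj.name}")

print(choose_reference(Obj("cube", "red", being_handled=True, visible=True)))
```

In a real system the first branch corresponds to the manipulation-based references observed in the human-human corpus, where handling an object replaces most of the descriptive content of the expression.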