The paper presents an approach to using structural descriptions, obtained through a human-robot tutoring dialogue, as labels for the visual object models a robot learns. It shows how structural descriptions make it possible to relate models for different aspects of the same object, and how relating descriptions of visual models to discourse referents enables incremental updating of model descriptions through dialogue, whether robot- or human-initiated. The approach has been implemented in an integrated architecture for human-assisted robot visual learning.