This paper presents an experiment investigating the effect of physical constraints on a subject's viewpoint when spoken language is used to navigate a robot. For the experiment, a robot navigation environment named Spondia-II was developed with an actual autonomous mobile robot. It is well known that the meaning of an utterance, such as a demonstrative pronoun, depends on the viewpoint of the speaker or the hearer. In conversation between people, the primary factor determining viewpoint is the set of physical constraints mediated by their body movements. This paper shows that these physical constraints also affect viewpoint when people instruct a robot. Furthermore, it is argued that the utterance process would improve greatly if the robot were able to comprehend these constraints.