As robots collaborate with humans in increasingly diverse environments, they will need to refer effectively to objects of joint interest and to adapt their references to varying physical, environmental, and task conditions. Humans use a broad range of deictic gestures (gestures that direct attention to collocated objects, persons, or spaces), including pointing, touching, and exhibiting, to help their listeners understand their references. These gestures offer varying levels of support under different conditions, making some more or less suitable for particular settings. While deictic gestures offer a rich design space for communicative robot behaviors, a better understanding of how different gestures affect communication under different conditions is critical for achieving effective human-robot interaction. In this paper, we seek to build such an understanding by implementing six deictic gestures on a humanlike robot and evaluating their communicative effectiveness in six diverse settings that represent the physical, environmental, and task conditions under which robots are expected to employ deictic communication. Our results show that gestures that come into physical contact with the object offer the highest overall communicative accuracy and that specific settings benefit from particular types of gestures. These results highlight the rich design space for deictic gestures and inform how robots might adapt their gestures to specific physical, environmental, and task conditions.