How a robot should give advice
Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction
When a robot provides direction--as a guide, an assistant, or an instructor--the robot may have to interact with people of different backgrounds and skill sets. Different people require information adapted to their level of understanding. In this paper, we explore two simple forms of awareness that a robot might use to infer that a person needs further verbal elaboration during a tool selection task. First, the robot could use an eye tracker to infer whether the person is looking at the robot and thus in need of further elaboration. Second, the robot could monitor delays in the individual's task progress, which indicate that he or she could use further elaboration. We investigated the effects of these two types of awareness on performance time, selection mistakes, and the number of questions people asked the robot. We did not observe any obvious benefits of our gaze awareness manipulation. Awareness of task delays did reduce the number of questions participants asked compared to our control condition, but did not significantly reduce the number of selection mistakes. These mixed results suggest that more research is needed before we understand how awareness of gaze and awareness of task delay can be successfully implemented in human-robot dialogue.
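The two awareness cues described above could be combined in a simple decision rule: elaborate when the person looks at the robot, or when task progress stalls. The sketch below is purely illustrative; the function name, threshold, and logic are assumptions, not details from the paper.

```python
# Hypothetical sketch of the two awareness cues: gaze toward the robot
# and a stall in task progress each trigger verbal elaboration.
# The threshold value is illustrative, not taken from the study.
DELAY_THRESHOLD_S = 5.0  # assumed stall threshold in seconds

def needs_elaboration(looking_at_robot: bool,
                      last_progress_time: float,
                      now: float) -> bool:
    """Return True if either awareness cue suggests more detail is needed."""
    gaze_cue = looking_at_robot
    delay_cue = (now - last_progress_time) > DELAY_THRESHOLD_S
    return gaze_cue or delay_cue

# Example: no gaze, but 8 seconds without task progress -> elaborate
print(needs_elaboration(False, last_progress_time=0.0, now=8.0))  # True
```

In practice the gaze cue would come from an eye tracker and the progress timestamps from the task interface; here both are passed in directly to keep the example self-contained.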