Joint attention, the process by which humans make inferences from the observable behaviors of other humans by attending to the objects and events that those others attend to, has been recognized as a critical component of human-robot interaction. While various HRI studies have shown that having robots behave in ways that support human recognition of joint attention leads to better behavioral outcomes on the human side, no study has investigated the detailed time course of interactive joint attention processes. In this paper, we present results from an HRI study that investigates, in unprecedented detail, the exact time course of human multimodal attentional processes during an HRI word learning task. Using novel data analysis techniques, we demonstrate that the temporal details of human attentional behavior are critical for understanding human expectations of joint attention in HRI, and that a robot's failure to meet those expectations can force humans into adopting unnatural behaviors.