Efficient collaboration between interacting agents, be they humans, virtual agents, or embodied agents, requires mutual recognition of the goal, appropriate sequencing and coordination of each agent's behavior with that of others, and predictions from and about the likely behavior of others. Moment-by-moment eye gaze plays an important role in such interaction and collaboration. In light of this, we used a novel experimental paradigm to systematically investigate gaze patterns in both human-human and human-agent interactions. Participants were asked to interact with either another human or an embodied agent in a joint attention task. Fine-grained multimodal behavioral data were recorded, including eye movements, speech, and first-person video, and then analyzed to discover behavioral patterns. These patterns show that human participants are highly sensitive to the momentary multimodal behaviors generated by their social partner (whether another human or an artificial agent) and rapidly adapt their gaze behaviors accordingly. The results of this data-driven approach provide new findings for understanding micro-behaviors in human-human communication, which will be critical for designing artificial agents that can generate human-like gaze behaviors and engage in multimodal interactions with humans.
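To make the kind of analysis described above concrete, here is a minimal, hypothetical sketch of how two time-aligned gaze streams (one per interaction partner) might be mined for joint-attention episodes and gaze-following latencies. Everything in it, including the 30 Hz sampling rate, the object labels, and the function names joint_attention_episodes and follow_latencies, is an illustrative assumption rather than the authors' actual analysis pipeline.

```python
# Hypothetical sketch: given two frame-aligned gaze streams (one per partner,
# each a list of gaze-target labels, None = looking elsewhere), find episodes
# of joint attention and measure how quickly one partner's gaze shift is
# followed by the other's. Names and parameters are illustrative assumptions.

from itertools import groupby

SAMPLE_HZ = 30  # assumed sampling rate of the gaze trackers

def joint_attention_episodes(gaze_a, gaze_b, min_frames=6):
    """Return (start, end, target) spans where both streams fixate the
    same target for at least min_frames consecutive samples."""
    assert len(gaze_a) == len(gaze_b)
    shared = [a if a == b else None for a, b in zip(gaze_a, gaze_b)]
    episodes, i = [], 0
    for target, run in groupby(shared):
        n = len(list(run))
        if target is not None and n >= min_frames:
            episodes.append((i, i + n, target))
        i += n
    return episodes

def follow_latencies(gaze_leader, gaze_follower, max_lag=SAMPLE_HZ):
    """For each gaze-shift onset in the leader stream, return the latency
    (in frames) until the follower fixates the same target, searching
    within a max_lag-frame coupling window."""
    latencies = []
    for t in range(1, len(gaze_leader)):
        tgt = gaze_leader[t]
        if tgt is None or tgt == gaze_leader[t - 1]:
            continue  # not a shift onset
        for lag in range(max_lag):
            if t + lag < len(gaze_follower) and gaze_follower[t + lag] == tgt:
                latencies.append(lag)
                break
    return latencies

# Toy frame-by-frame gaze-target labels for a human and an artificial agent.
human = ["obj1"] * 10 + ["obj2"] * 20 + [None] * 5
agent = [None] * 13 + ["obj2"] * 17 + ["obj1"] * 5

print(joint_attention_episodes(human, agent))      # -> [(13, 30, 'obj2')]
print(follow_latencies(human, agent, max_lag=10))  # -> [3]
```

In practice, thresholds such as min_frames and coupling windows such as max_lag would be chosen from the data itself; the sketch only illustrates that moment-by-moment coupling between partners' gaze streams can be quantified directly from frame-level fixation labels.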