A better understanding of human users' expectations of, and sensitivities to, the real-time behavior generated by virtual agents can yield empirical insights and useful principles to guide the design of intelligent virtual agents. In light of this, we propose and implement a research framework to systematically study and evaluate key aspects of multimodal real-time interactions between humans and virtual agents. Our platform allows the virtual agent to track the user's gaze and hand movements in real time and adjust its own behaviors accordingly. During human-avatar interactions, the platform collects multimodal data streams, including speech, eye gaze, and hand and head movements from both the human user and the virtual agent; these streams are then used to discover fine-grained behavioral patterns in human-agent interactions. We present a pilot study based on the proposed framework as an example of the kinds of research questions it allows us to address rigorously. This first study, which investigates human-agent joint attention, reveals promising results about the role and functioning of joint attention in human-avatar interactions.
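To make the data flow concrete, the sketch below illustrates the kind of real-time loop the framework describes: the agent polls the user's gaze and hand positions at a fixed rate, produces a contingent joint-attention response, and logs all streams against a shared clock for later pattern analysis. This is a minimal illustration, not the authors' implementation; the sensor and agent functions (read_user_gaze, read_user_hand, agent_look_at) are hypothetical placeholders standing in for real tracker and avatar APIs.

```python
# Minimal sketch of a contingent real-time loop with synchronized
# multimodal logging. All sensor/agent functions are placeholders.
import csv
import random
import time

def read_user_gaze():
    """Placeholder for an eye-tracker read; returns a 2-D gaze point."""
    return (random.random(), random.random())

def read_user_hand():
    """Placeholder for a motion-tracker read; returns a 2-D hand point."""
    return (random.random(), random.random())

def agent_look_at(target):
    """Placeholder for driving the avatar's gaze toward a target point."""
    return target  # the agent attends to the same target as the user

def run_session(duration_s=5.0, rate_hz=30.0, log_path="session_log.csv"):
    period = 1.0 / rate_hz
    start = time.monotonic()
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "user_gaze_x", "user_gaze_y",
                         "user_hand_x", "user_hand_y",
                         "agent_gaze_x", "agent_gaze_y"])
        while (now := time.monotonic()) - start < duration_s:
            gaze = read_user_gaze()
            hand = read_user_hand()
            # Contingent behavior: the agent looks where the user looks,
            # creating the moments of joint attention the study examines.
            agent_gaze = agent_look_at(gaze)
            # One shared timestamp per tick keeps the streams aligned
            # for fine-grained sequential pattern mining.
            writer.writerow([round(now - start, 3), *gaze, *hand, *agent_gaze])
            time.sleep(period)

if __name__ == "__main__":
    run_session()
```

The design choice worth noting is that every modality is stamped with the same monotonic session clock at the moment of capture, which is what makes fine-grained cross-modal patterns (such as gaze-hand coupling around joint-attention episodes) recoverable offline.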