We describe a machine learning approach that allows an open-world spoken dialog system to learn to predict engagement intentions in situ, from interaction. The proposed approach does not require any developer supervision, and leverages spatiotemporal and attentional features automatically extracted from a visual analysis of people coming into the proximity of the system to produce models that are attuned to the characteristics of the environment the system is placed in. Experimental results indicate that a system using the proposed approach can learn to recognize engagement intentions at low false positive rates (e.g. 2--4%) up to 3--4 seconds prior to the actual moment of engagement.
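The core idea above — generating training labels in situ, with no developer supervision, by treating frames shortly before an observed engagement as positives — can be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: the feature set (distance, approach speed, facing), the synthetic trajectory generator, the labeling horizon, and the tiny logistic-regression learner are all illustrative stand-ins for the richer spatiotemporal and attentional features and models described in the abstract.

```python
import math
import random

random.seed(0)

def trajectory(engages, steps=20):
    """Simulate per-frame features [distance_m, approach_speed, facing].
    Illustrative assumption: engaging people approach and face the system."""
    d = random.uniform(3.0, 6.0)
    frames = []
    for _ in range(steps):
        if engages:
            speed = random.uniform(0.1, 0.3)
            facing = 1.0 if random.random() < 0.9 else 0.0
        else:
            speed = random.uniform(-0.1, 0.1)
            facing = 1.0 if random.random() < 0.2 else 0.0
        d = max(0.5, d - speed)
        frames.append([d, speed, facing])
    return frames

# Self-supervised labeling: frames within HORIZON frames of an actual
# engagement event become positives; all other frames are negatives.
# No human annotation is needed -- the engagement event itself labels the data.
HORIZON = 8
X, y = [], []
for _ in range(200):
    engages = random.random() < 0.5
    frames = trajectory(engages)
    for t, f in enumerate(frames):
        y.append(1 if engages and t >= len(frames) - HORIZON else 0)
        X.append(f)

# Tiny logistic regression fit by full-batch gradient descent
# (stand-in for whatever classifier the deployed system would use).
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(300):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for f, lab in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))
        err = p - lab
        for i in range(3):
            gw[i] += err * f[i]
        gb += err
    n = len(X)
    for i in range(3):
        w[i] -= lr * gw[i] / n
    b -= lr * gb / n

def p_engage(frame):
    """Predicted probability that this frame precedes an engagement."""
    z = sum(wi * fi for wi, fi in zip(w, frame)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Because labels come from observed outcomes rather than annotators, a deployed system can keep retraining on its own interaction logs, adapting the model to the traffic patterns of the specific environment it is placed in.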