In face-to-face conversations, speakers continuously check whether the listener is engaged and change their conversational strategy when the listener is not fully engaged. With the goal of building a conversational agent that can adaptively control conversations, in this study we analyze listener gaze behaviors and develop a method for estimating from these behaviors whether a listener is engaged. First, we conduct a Wizard-of-Oz study to collect data on users' gaze behaviors. We then investigate how conversational disengagement, as annotated by human judges, correlates with gaze transitions, the occurrence of mutual gaze (eye contact), gaze duration, and eye movement distance. On the basis of these analyses, we identify information useful for estimating a user's disengagement and establish an engagement estimation method using a decision tree technique. A model combining the features of gaze transition, mutual gaze occurrence, gaze duration, and eye movement distance provides the best performance and estimates the user's conversational engagement accurately. The estimation model is then implemented as a real-time disengagement judgment mechanism and incorporated into the multimodal dialog manager of an animated conversational agent, which is designed to estimate the user's conversational engagement and generate probing questions when the user is distracted from the conversation. Finally, we evaluate the engagement-sensitive agent and find that asking probing questions at the proper times has the expected effects on the user's verbal and nonverbal behaviors during communication with the agent. We also find that the agent system improves the user's impression of the agent in terms of engagement awareness, behavior appropriateness, conversation smoothness, favorability, and intelligence.
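To make the estimation step concrete, the following is a minimal sketch of a decision-tree engagement classifier over the four gaze feature types named in the abstract. The feature encodings, the toy training data, and the use of scikit-learn's `DecisionTreeClassifier` are all illustrative assumptions for exposition; they are not the authors' actual dataset, feature definitions, or trained model.

```python
# Hedged sketch: a toy decision-tree engagement estimator.
# All feature values and labels below are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row is one analysis window:
# [gaze_transition_rate, mutual_gaze_ratio,
#  mean_gaze_duration_s, eye_movement_distance_px]
X = np.array([
    [0.2, 0.8, 2.5,  40.0],   # steady gaze toward the agent
    [0.3, 0.7, 2.0,  55.0],
    [0.9, 0.1, 0.4, 220.0],   # frequent gaze shifts away
    [0.8, 0.2, 0.6, 180.0],
])
y = np.array([1, 1, 0, 0])    # 1 = engaged, 0 = disengaged

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Classify a new window of gaze features in (near) real time
window = np.array([[0.85, 0.15, 0.5, 200.0]])
print(clf.predict(window)[0])  # 0 -> judged disengaged
```

In a dialog manager of the kind described, a `0` (disengaged) judgment would be the trigger for generating a probing question, while `1` would let the conversation proceed normally.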