In multi-user human-agent interaction, the agent should respond to a user when an utterance is addressed to it. To do so, the agent must be able to judge whether each utterance is addressed to the agent or to another user. This study proposes a method for estimating the addressee based on the prosodic features of the user's speech and head direction (an approximation of gaze direction). First, a Wizard-of-Oz (WOZ) experiment is conducted to collect a corpus of human-human-agent triadic conversations. The corpus is then analyzed to determine whether prosodic features and head direction information correlate with addressee-hood. Based on this analysis, an SVM classifier is trained to estimate the addressee by integrating both prosodic features and head movement information. Finally, a prototype agent equipped with this real-time addressee estimation mechanism is developed and evaluated.
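The classifier described above can be sketched roughly as follows. This is a minimal illustration using scikit-learn, not the authors' implementation: the specific features (F0 statistics, RMS energy, head yaw) and the synthetic data generator are assumptions standing in for the real prosodic and head-tracking measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def make_utterance(addressed_to_agent):
    """Synthetic feature vector: [mean F0 (Hz), F0 range, RMS energy, head yaw (deg)].
    Assumption for illustration: utterances addressed to the agent are louder,
    higher-pitched, and accompanied by a head orientation toward the agent
    (yaw near 0 degrees)."""
    if addressed_to_agent:
        return [rng.normal(220, 20), rng.normal(80, 10),
                rng.normal(0.7, 0.1), rng.normal(0, 10)]
    return [rng.normal(180, 20), rng.normal(50, 10),
            rng.normal(0.4, 0.1), rng.normal(60, 15)]

# Build a toy corpus of labeled utterances (True = addressed to the agent).
X = np.array([make_utterance(i % 2 == 0) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)])

# Standardize the heterogeneous features before the SVM, since prosodic
# and angular measurements live on very different scales.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Classify a new utterance: high pitch and energy, head turned toward the agent.
print(clf.predict([[225, 85, 0.75, 5]])[0])
```

In a real-time agent, the same pipeline would be fed per-utterance features extracted from the microphone and head tracker as each utterance ends.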