In multiparty human-agent interaction, the agent should be able to respond appropriately to a user by determining whether an utterance is addressed to the agent or to another person. This study proposes a model that predicts the addressee from acoustic features of the speech together with head orientation as a nonverbal cue. First, we conducted a Wizard-of-Oz (WOZ) experiment to collect human-agent triadic conversations. We then analyzed whether the acoustic features and head orientations were correlated with addressee-hood. Based on this analysis, we propose an addressee prediction model that integrates acoustic and bodily nonverbal information using a support vector machine (SVM).
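The described model could be sketched as a binary SVM classifier over per-utterance feature vectors that concatenate acoustic and head-orientation features. The following is a minimal illustration, assuming scikit-learn; the specific feature names (mean F0, intensity, speech rate, head yaw/pitch) and the synthetic data are illustrative assumptions, not the paper's actual feature set or corpus.

```python
# Illustrative sketch of an SVM-based addressee predictor; feature choices
# and data here are assumptions, not the study's actual setup.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy per-utterance features:
# [mean F0, intensity, speech rate, head yaw, head pitch]
X = rng.normal(size=(100, 5))
# Labels: 1 = addressed to the agent, 0 = addressed to the other human.
# Synthetic rule for demonstration only.
y = (X[:, 3] + 0.5 * X[:, 0] > 0).astype(int)

# Standardize features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Predict the addressee of new utterances.
pred = clf.predict(X[:5])
print(pred.shape)  # (5,)
```

Scaling the features before the SVM matters here because acoustic measures (e.g., F0 in Hz) and head angles (degrees) live on very different scales, and kernel SVMs are sensitive to feature magnitude.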