This paper proposes a method for identifying the addressee of an utterance from speech and gaze information, and shows that the method is applicable to human-human-agent multiparty conversations under different proxemics. We first collected human-human-agent interaction data under different proxemics and found, through analysis, that people spoke with a higher pitch, more loudly, and more slowly when talking to the agent. We also confirmed that this speech style was consistent regardless of the proxemics. We then trained a support vector machine (SVM) to obtain a general addressee estimation model usable across different proxemics; the model achieved over 80% accuracy in 10-fold cross-validation.
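The classification setup described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature set (pitch, loudness, speech rate), the synthetic data, and the RBF-kernel SVM with 10-fold cross-validation are all assumptions chosen to mirror the abstract's description, and the real system also uses gaze features and recorded speech.

```python
# Hedged sketch of an SVM addressee classifier in the spirit of the abstract.
# The synthetic data encodes the reported finding: speech addressed to the
# agent is higher-pitched, louder, and slower than speech addressed to humans.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# 1 = utterance addressed to the agent, 0 = addressed to a human (assumed labels)
labels = rng.integers(0, 2, size=n)
pitch = rng.normal(180, 20, n) + 30 * labels    # fundamental frequency, Hz
loudness = rng.normal(60, 5, n) + 6 * labels    # intensity, dB
rate = rng.normal(5.0, 0.5, n) - 0.8 * labels   # speech rate, syllables/sec
X = np.column_stack([pitch, loudness, rate])

# Standardize features, then classify; evaluate with 10-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=10)
print(f"mean 10-fold accuracy: {scores.mean():.2f}")
```

On this synthetic data the classifier separates the two addressee classes well, which is only meant to show the pipeline shape; the paper's 80%+ figure comes from real multiparty interaction data.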