Addressee identification for human-human-agent multiparty conversations in different proxemics

  • Authors: Naoya Baba; Hung-Hsuan Huang; Yukiko I. Nakano
  • Affiliations: Seikei University, Japan; Ritsumeikan University, Japan; Seikei University, Japan
  • Venue: Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction
  • Year: 2012

Abstract

This paper proposes a method for identifying the addressee in multiparty conversations based on speech and gaze information, and shows that the method is applicable to human-human-agent conversations in different proxemics. First, we collected human-human-agent interaction data in different proxemic settings. Analysis of the data showed that participants spoke with a higher pitch, more loudly, and more slowly when talking to the agent, and that this speech style was consistent regardless of proxemics. We then used a support vector machine (SVM) to build a general addressee estimation model that works across proxemic settings; the model achieved over 80% accuracy in 10-fold cross-validation.
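The pipeline the abstract describes — prosodic and gaze features fed to an SVM and evaluated with 10-fold cross-validation — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature set (mean pitch, intensity, speech rate, gaze-at-agent ratio), the synthetic data, and the RBF kernel choice are all assumptions.

```python
# Hypothetical sketch of an SVM addressee classifier in the spirit of the paper.
# Features and data are illustrative, not the authors' actual corpus.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200  # samples per class (assumed)

# Synthetic feature vectors: [mean F0 (Hz), intensity (dB), speech rate
# (syllables/s), gaze-at-agent ratio]. Following the paper's finding that
# agent-addressed speech is higher-pitched, louder, and slower, the two
# classes are simulated with shifted means along those dimensions.
to_human = rng.normal([120, 60, 5.0, 0.2], [15, 5, 0.8, 0.1], size=(n, 4))
to_agent = rng.normal([160, 70, 3.5, 0.7], [15, 5, 0.8, 0.1], size=(n, 4))
X = np.vstack([to_human, to_agent])
y = np.array([0] * n + [1] * n)  # 0 = addressed to human, 1 = to agent

# Scale features, then classify with an SVM; evaluate by 10-fold
# cross-validation as reported in the paper (kernel choice is an assumption).
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=10)
print(f"mean 10-fold accuracy: {scores.mean():.2f}")
```

On real data the features would come from acoustic analysis (e.g. F0 and intensity extraction) and gaze tracking rather than a random generator, and the reported 80%+ accuracy refers to the authors' corpus, not this toy setup.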