A multimodal analysis of floor control in meetings
MLMI'06 Proceedings of the Third international conference on Machine Learning for Multimodal Interaction
We present a procedure for annotating the conversational floor and discuss floor types and floor switches in face-to-face meetings and their relation to addressing behavior. Understanding interactions in meetings appears to require a layered floor model: turn and floor changes are constrained by the ongoing activities and by the roles that the agent and its conversational partners play in those activities. We present statistics on the speaker's addressee and the speaker's role in the ongoing activity, along with a simple method that predicts the addressee from speaker role and floor state. The results support the expectation that information about the activity and the speaker's role will improve the detection and interpretation of social signals drawn from speaker-addressee patterns in meetings.
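One way to realize a predictor of this kind is as a contextual majority baseline: for each observed (speaker role, floor state) pair, remember the most frequent addressee and fall back to a default for unseen contexts. The sketch below illustrates that idea only; the role, floor-state, and addressee labels are hypothetical and do not come from the paper's annotation scheme.

```python
from collections import Counter, defaultdict

# Hypothetical training triples: (speaker_role, floor_state, addressee).
# The label sets are illustrative, not the paper's actual annotations.
observations = [
    ("project-manager", "single-floor", "group"),
    ("project-manager", "single-floor", "group"),
    ("industrial-designer", "side-floor", "project-manager"),
    ("industrial-designer", "side-floor", "project-manager"),
    ("industrial-designer", "single-floor", "group"),
]

def train(observations):
    """For each (role, floor_state) context, store the most frequent addressee."""
    counts = defaultdict(Counter)
    for role, floor, addressee in observations:
        counts[(role, floor)][addressee] += 1
    return {ctx: ctr.most_common(1)[0][0] for ctx, ctr in counts.items()}

def predict(model, role, floor, default="group"):
    """Predict the addressee for a new utterance, backing off to a default."""
    return model.get((role, floor), default)

model = train(observations)
print(predict(model, "industrial-designer", "side-floor"))  # -> project-manager
```

Such a baseline makes the paper's point concrete: even very coarse information about role and floor state carries signal about who is being addressed, before any acoustic or visual features are considered.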