This paper presents a multimodal framework that employs eye-gaze, head-pose, and speech cues to explain observed social attention patterns in meeting scenes. We first investigate several hypotheses concerning social attention and characterize meetings and individuals using ground-truth data. We then replicate the ground-truth results through automated estimation of eye gaze, head pose, and speech activity for each participant. Experimental results show that combining eye-gaze and head-pose estimates reduces the error in social attention estimation by over 26%.
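As a rough illustration of how combining eye-gaze and head-pose cues for attention estimation might look in practice, the sketch below fuses the two direction estimates with a weighted average and assigns the resulting attention direction to the best-aligned participant. This is a minimal sketch under stated assumptions, not the paper's method: the fusion weight, the scene geometry, and all function names are hypothetical.

```python
import numpy as np

def fuse_directions(gaze_dir, head_dir, gaze_weight=0.7):
    """Weighted combination of two unit 3-D direction vectors.

    gaze_weight is an illustrative assumption; eye gaze is weighted
    more heavily here only because it is the finer-grained cue.
    """
    fused = gaze_weight * gaze_dir + (1.0 - gaze_weight) * head_dir
    return fused / np.linalg.norm(fused)

def focus_of_attention(subject_pos, fused_dir, target_positions):
    """Index of the target best aligned with the fused direction
    (cosine similarity between the fused direction and the
    subject-to-target direction)."""
    scores = []
    for target in target_positions:
        to_target = target - subject_pos
        to_target = to_target / np.linalg.norm(to_target)
        scores.append(float(np.dot(fused_dir, to_target)))
    return int(np.argmax(scores))

# Hypothetical example: one subject and three seated participants.
subject = np.array([0.0, 0.0, 0.0])
targets = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([-1.0, 0.0, 0.0])]

gaze = np.array([0.1, 0.9, 0.0])
gaze = gaze / np.linalg.norm(gaze)
head = np.array([0.3, 0.7, 0.0])
head = head / np.linalg.norm(head)

direction = fuse_directions(gaze, head)
print("Attended target:", focus_of_attention(subject, direction, targets))
```

Run per frame, a fusion of this kind yields a discrete visual-focus-of-attention label for each participant; the intuition behind the paper's reported error reduction is that head pose provides a coarse but robust direction while eye gaze refines it, so their combination is less error-prone than either cue alone.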