This paper addresses the automatic estimation of two aspects of social verticality (status and dominance) in small-group meetings using nonverbal cues. The correlation of nonverbal behavior with these social constructs has been extensively documented in social psychology, but its value for computational models is, in many cases, still unknown. We present a systematic study of automatically extracted cues, including vocalic, visual activity, and visual attention cues, and investigate their relative effectiveness in predicting both the most dominant person and the high-status project manager from relatively short observations. We use five hours of task-oriented meeting data with natural behavior for our experiments. Our work suggests that, although dominance and role-based status are related concepts, they are not equivalent and thus are not equally explained by the same nonverbal cues. Furthermore, the best cues can correctly predict the person with the highest dominance or role-based status with approximately 70% accuracy.
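The single-cue prediction scheme described in the abstract can be illustrated with a minimal sketch. Assuming each cue (e.g., accumulated speaking time, a vocalic cue commonly used in this line of work) is reduced to one value per participant over an observation window, the predicted most-dominant or highest-status person is simply the participant maximizing that value. The function and data below are illustrative assumptions, not the authors' implementation.

```python
def predict_top_person(cue_per_person):
    """Return the participant id with the largest accumulated cue value.

    cue_per_person: dict mapping participant id -> cue value
    (e.g., seconds of speaking time in the observation window).
    """
    return max(cue_per_person, key=cue_per_person.get)

# Hypothetical meeting slice: total speaking time (seconds) per participant.
slice_cues = {"A": 112.0, "B": 64.5, "C": 31.0, "D": 92.5}
print(predict_top_person(slice_cues))  # -> A
```

Different cues (visual activity, received visual attention) would each yield their own per-participant totals, and the abstract's finding is that the cue that best ranks participants for dominance need not be the one that best identifies the role-based high-status person.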