Despite evidence that social video conveys rich personality information, research on the automatic prediction of personality impressions in vlogging has shown that, among the Big-Five traits, automatically extracted nonverbal behavioral cues are useful mainly for predicting Extraversion. This finding, also reported in other conversational settings, suggests that personality information may be encoded in other behavioral dimensions, such as the verbal channel, which has received less attention in multimodal interaction research. In this paper, we address the task of predicting personality impressions of vloggers from what they say in their YouTube videos. First, we use manual transcripts of vlogs and verbal content analysis techniques to assess how well verbal content predicts crowdsourced Big-Five personality impressions. Second, we explore the feasibility of a fully automatic framework in which transcripts are obtained with automatic speech recognition (ASR). Our results show that the analysis of error-free verbal content is useful for predicting four of the Big-Five traits, three of them better than with nonverbal cues, and that ASR errors significantly decrease prediction performance.
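The abstract does not specify the prediction method, but verbal content analysis for trait prediction is commonly framed as regression from word-category frequencies to annotated trait scores. The sketch below illustrates that general setup under stated assumptions: the word categories, toy transcripts, and least-squares model are illustrative choices, not the paper's actual pipeline.

```python
# Hedged sketch (NOT the paper's method): predict a single Big-Five
# impression score from transcript text via word-category frequencies
# and ordinary least-squares regression. Categories below are invented
# stand-ins for a content-analysis lexicon such as LIWC.
import re

CATEGORIES = {
    "social": {"we", "you", "friend", "talk"},
    "positive": {"great", "happy", "love", "fun"},
    "negation": {"no", "not", "never"},
}

def features(text):
    """Bias term plus the relative frequency of each word category."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return [1.0] + [sum(w in cat for w in words) / n
                    for cat in CATEGORIES.values()]

def fit_ols(X, y):
    """Solve the normal equations (X^T X) w = X^T y by Gauss-Jordan
    elimination with partial pivoting; returns the weight vector w."""
    d = len(X[0])
    A = [[sum(row[r] * row[c] for row in X) for c in range(d)]
         for r in range(d)]
    b = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(d)]
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(d):
            if r != col and A[col][col]:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(d)]

def predict(w, text):
    """Predicted trait score for one transcript."""
    return sum(wi * xi for wi, xi in zip(w, features(text)))
```

In this framing, the manual-transcript and ASR conditions reported in the abstract differ only in the `text` fed to `features`: ASR word errors perturb the category frequencies, which is one plausible mechanism for the performance drop the authors observe.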