SEMAINE has created a large audiovisual database as part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from interactions between users and an "operator" simulating a SAL agent, in different configurations: Solid SAL (designed so that operators displayed appropriate nonverbal behavior) and Semi-automatic SAL (designed so that users' experience approximated interacting with a machine). We then recorded user interactions with the developed system, Automatic SAL, comparing the most communicatively competent version to versions with reduced nonverbal skills. High-quality recording was provided by five high-resolution, high-framerate cameras and four microphones, recorded synchronously. The recordings cover 150 participants, for a total of 959 conversations with individual SAL characters, each lasting approximately 5 minutes. Solid SAL recordings are transcribed and extensively annotated: 6-8 raters per clip traced five affective dimensions and 27 associated categories. Other scenarios are labeled on the same pattern, but less fully. Additional information includes FACS annotation on selected extracts, identification of laughs, nods, and shakes, and measures of user engagement with the automatic system. The material is available through a web-accessible database.
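The continuous trace annotations above (6-8 raters per clip, each tracing affective dimensions over time) are typically fused into a single gold-standard trace before use. A minimal sketch of one common fusion strategy, per-frame averaging across raters, is shown below; the function and variable names are illustrative assumptions, not part of any SEMAINE tool or API.

```python
# Hypothetical sketch: fuse several raters' continuous trace annotations
# (e.g., valence over time) into one gold-standard trace by per-frame
# averaging. Names here are assumptions for illustration only.
from statistics import mean

def gold_standard(traces):
    """Average per-frame ratings across raters.

    traces: list of equal-length rating sequences, one per rater,
            each a list of floats (e.g., in [-1, 1]).
    Returns one averaged trace of the same length.
    """
    if not traces:
        return []
    length = len(traces[0])
    if any(len(t) != length for t in traces):
        raise ValueError("all rater traces must have the same length")
    return [mean(frame) for frame in zip(*traces)]

# Example: three raters, four frames of a valence trace
raters = [
    [0.1, 0.2, 0.3, 0.4],
    [0.3, 0.2, 0.1, 0.4],
    [0.2, 0.2, 0.2, 0.4],
]
print([round(v, 3) for v in gold_standard(raters)])
```

In practice, fusion schemes also compensate for rater-specific lag and scale (for example by time-shifting or normalizing each trace before averaging), but simple averaging is the usual baseline.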