Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns
IEEE Transactions on Pattern Analysis and Machine Intelligence
Face Description with Local Binary Patterns: Application to Face Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
Facial expression recognition based on Local Binary Patterns: A comprehensive study
Image and Vision Computing
Cost-Effective Solution to Synchronized Audio-Visual Capture Using Multiple Sensors
AVSS '09 Proceedings of the 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance
The WEKA data mining software: an update
ACM SIGKDD Explorations Newsletter
openSMILE: the Munich versatile and fast open-source audio feature extractor
Proceedings of the international conference on Multimedia
Dimensionality reduction and classification analysis on the audio section of the SEMAINE database
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
Speech emotion recognition system based on L1 regularized linear regression and decision fusion
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
A psychologically-inspired match-score fusion model for video-based facial expression recognition
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
Continuous emotion recognition using gabor energy filters
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
Multiple classifier systems for the classification of audio-visual emotional states
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
Investigating the use of formant based features for detection of affective dimensions in speech
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
The CASIA audio emotion recognition method for audio/visual emotion challenge 2011
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
Modeling latent discriminative dynamic of multi-dimensional affective signals
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
Investigating glottal parameters and teager energy operators in emotion recognition
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
Classification of emotional speech using 3DEC hierarchical classifier
Speech Communication
Dominance detection in a reverberated acoustic scenario
ISNN'12 Proceedings of the 9th international conference on Advances in Neural Networks - Volume Part I
Machine analysis and recognition of social contexts
Proceedings of the 14th ACM international conference on Multimodal interaction
AVEC 2012: the continuous audio/visual emotion challenge - an introduction
Proceedings of the 14th ACM international conference on Multimodal interaction
AVEC 2012: the continuous audio/visual emotion challenge
Proceedings of the 14th ACM international conference on Multimodal interaction
Multiple classifier combination using reject options and Markov fusion networks
Proceedings of the 14th ACM international conference on Multimodal interaction
Robust continuous prediction of human emotions using multiscale dynamic cues
Proceedings of the 14th ACM international conference on Multimodal interaction
Elastic net for paralinguistic speech recognition
Proceedings of the 14th ACM international conference on Multimodal interaction
Preserving actual dynamic trend of emotion in dimensional speech emotion recognition
Proceedings of the 14th ACM international conference on Multimodal interaction
Keyword spotting exploiting Long Short-Term Memory
Speech Communication
LSTM-Modeling of continuous emotions in an audiovisual affect recognition framework
Image and Vision Computing
Fusion of fragmentary classifier decisions for affective state recognition
MPRSS'12 Proceedings of the First international conference on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction
Words that Fascinate the Listener: Predicting Affective Ratings of On-Line Lectures
International Journal of Distance Education Technologies
AVEC 2013: the continuous audio/visual emotion and depression recognition challenge
Proceedings of the 3rd ACM international workshop on Audio/visual emotion challenge
Audiovisual three-level fusion for continuous estimation of Russell's emotion circumplex
Proceedings of the 3rd ACM international workshop on Audio/visual emotion challenge
Emotion recognition in the wild challenge 2013
Proceedings of the 15th ACM on International conference on multimodal interaction
Distribution-based iterative pairwise classification of emotions in the wild using LGBP-TOP
Proceedings of the 15th ACM on International conference on multimodal interaction
Emotion recognition in the wild challenge (EmotiW) challenge and workshop summary
Proceedings of the 15th ACM on International conference on multimodal interaction
Shape-based modeling of the fundamental frequency contour for emotion detection in speech
Computer Speech and Language
The Audio/Visual Emotion Challenge and Workshop (AVEC 2011) is the first competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual, and audiovisual emotion analysis, with all participants competing under strictly the same conditions. This paper first describes the challenge participation conditions. It then presents the data used, the SEMAINE corpus, and its partitioning into train, development, and test sets for the challenge, with labelling in four affective dimensions: activity, expectation, power, and valence. Finally, audio and video baseline features are introduced, along with baseline results obtained with these features for the three sub-challenges of audio, video, and audiovisual emotion recognition.