This paper addresses automatic emotion recognition in speech. We focus on a type of emotional manifestation that has rarely been studied in speech processing: fear-type emotions occurring during abnormal situations (here, unplanned events in which human life is threatened). The study is dedicated to a new application of emotion recognition: public safety. The starting point of this work is the definition and collection of data illustrating extreme emotional manifestations in threatening situations. For this purpose we developed the SAFE corpus (Situation Analysis in a Fictional and Emotional corpus), built from fiction movies. It consists of 7 hours of recordings organized into 400 audiovisual sequences. The corpus contains recordings of both normal and abnormal situations and covers a wide range of contexts, and therefore a wide range of emotional manifestations. It thus not only addresses the lack of corpora illustrating strong emotions but also provides a valuable resource for studying a high variety of emotional manifestations. We define a task-dependent annotation strategy whose particularity is to describe the emotion and the evolution of the situation simultaneously, in context. The emotion recognition system is trained on these data and must handle a large scope of unknown speakers and situations in noisy sound environments. It performs fear vs. neutral classification. The novelty of our approach lies in dissociated acoustic models of the voiced and unvoiced content of speech, which are merged at the decision step of the classification system. The results are quite promising given the complexity and diversity of the data: the error rate is about 30%.
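The decision-level fusion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the weighting scheme, and the example scores are all hypothetical, and the scores stand in for whatever per-class likelihoods the voiced and unvoiced acoustic models produce.

```python
# Sketch of decision-level fusion of two sub-classifiers: one trained on
# the voiced content of speech, one on the unvoiced content. Each model
# is assumed to output a log-likelihood-style score per class.

def fuse_decisions(voiced_scores, unvoiced_scores, w_voiced=0.5):
    """Combine per-class scores from the voiced and unvoiced models.

    voiced_scores / unvoiced_scores: dicts mapping a class label
    ("fear" / "neutral") to a score. Returns the label whose fused
    (weighted-sum) score is highest.
    """
    fused = {
        c: w_voiced * voiced_scores[c] + (1.0 - w_voiced) * unvoiced_scores[c]
        for c in voiced_scores
    }
    return max(fused, key=fused.get)

# Hypothetical scores: the voiced model leans toward "fear", the
# unvoiced model is nearly undecided; the fused decision is "fear".
voiced = {"fear": -10.2, "neutral": -12.5}
unvoiced = {"fear": -11.0, "neutral": -10.8}
print(fuse_decisions(voiced, unvoiced))  # -> fear
```

The equal weighting (`w_voiced=0.5`) is an arbitrary choice for illustration; in practice the weight would be tuned on development data.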