Emotion recognition in real-life conditions faces several challenging factors that most studies on emotion recognition do not consider, such as background noise, varying recording levels, and the acoustic properties of the environment. This paper presents a systematic evaluation of how background noise of various types and SNRs, as well as variations in recording level, affect the performance of automatic emotion recognition from speech. Both natural/spontaneous and acted/prototypical emotions are considered. Besides the well-known influence of additive noise, a significant influence of the recording level on recognition performance is observed. Multi-condition learning with various noise types and recording levels is proposed as a way to increase the robustness of methods based on standard acoustic feature sets and commonly used classifiers. It is compared to matched-conditions learning and found to be almost on par in many settings.
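The core data-preparation idea behind multi-condition learning, mixing each training utterance with noise at several SNRs and simulating different recording levels via gain changes, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure; the function names, the per-utterance power-based SNR scaling, and the specific SNR/gain grids are assumptions for the example.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio equals snr_db, then add it."""
    noise = np.resize(noise, speech.shape)  # loop/trim noise to the speech length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

def vary_level(signal, gain_db):
    """Simulate a different recording level with a fixed gain (in dB)."""
    return signal * 10 ** (gain_db / 20.0)

def multicondition_variants(speech, noises, snrs_db, gains_db):
    """One noisy, level-shifted copy per (noise type, SNR, gain) combination."""
    variants = []
    for noise in noises:
        for snr in snrs_db:
            for gain in gains_db:
                variants.append(vary_level(mix_at_snr(speech, noise, snr), gain))
    return variants
```

Feature extraction and classifier training would then run on all variants of each utterance, so the model sees every noise type and recording level during learning rather than only clean, fixed-level audio.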