Fundamentals of speech recognition
Machine Learning
Comparison of different implementations of MFCC
Journal of Computer Science and Technology
The production and recognition of emotions in speech: features and algorithms
International Journal of Human-Computer Studies - Application of affective computing in human-computer interaction
Object Recognition with Features Inspired by Visual Cortex
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 2
Neural Computation
2005 Special Issue: Challenges in real-life emotion annotation and machine learning based detection
Neural Networks - Special issue: Emotion and brain
A Fast Biologically Inspired Algorithm for Recurrent Motion Estimation
IEEE Transactions on Pattern Analysis and Machine Intelligence
Object Class Recognition and Localization Using Sparse Features with Limited Receptive Fields
International Journal of Computer Vision
The GMM-SVM Supervector Approach for the Recognition of the Emotional Status from Speech
ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I
RASTA-PLP speech analysis technique
ICASSP'92 Proceedings of the 1992 IEEE international conference on Acoustics, speech and signal processing - Volume 1
ICPR '10 Proceedings of the 2010 20th International Conference on Pattern Recognition
Multimodal emotion classification in naturalistic user behavior
HCII'11 Proceedings of the 14th international conference on Human-computer interaction: towards mobile and intelligent interaction environments - Volume Part III
AVEC 2011-the first international audio/visual emotion challenge
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
A hidden markov model based approach for facial expression recognition in image sequences
ANNPR'10 Proceedings of the 4th IAPR TC3 conference on Artificial Neural Networks in Pattern Recognition
Multiple classifier systems for the recognition of human emotions
MCS'10 Proceedings of the 9th international conference on Multiple Classifier Systems
Proceedings of the 14th ACM international conference on Multimodal interaction
On instance selection in audio based emotion recognition
ANNPR'12 Proceedings of the 5th INNS IAPR TC 3 GIRPR conference on Artificial Neural Networks in Pattern Recognition
A companion technology for cognitive technical systems
COST'11 Proceedings of the 2011 international conference on Cognitive Behavioural Systems
LSTM-Modeling of continuous emotions in an audiovisual affect recognition framework
Image and Vision Computing
Research activities in the field of human-computer interaction have increasingly addressed the integration of some form of emotional intelligence. Human emotions are expressed through different modalities such as speech, facial expressions, and hand or body gestures; the classification of human emotions should therefore be treated as a multimodal pattern recognition problem. The aim of our paper is to investigate multiple classifier systems utilizing audio and visual features to classify human emotional states. To this end, a variety of features has been derived. From the audio signal, the fundamental frequency, LPC and MFCC coefficients, and RASTA-PLP features have been used. In addition, two types of visual features have been computed, namely form and motion features of intermediate complexity. The numerical evaluation has been performed on the four emotional labels Arousal, Expectancy, Power, and Valence as defined in the AVEC data set. As classifier architectures, multiple classifier systems are applied; these have proven to be accurate and robust against missing and noisy data.
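The multiple classifier systems mentioned in the abstract combine the decisions of per-modality classifiers, which is what makes them tolerant of missing or noisy channels. A minimal sketch of one common form, decision-level fusion by weighted voting, is shown below; the classifier names and weights are hypothetical illustrations, not the paper's actual configuration.

```python
# Sketch of decision-level fusion in a multiple classifier system:
# each per-modality classifier casts a (weighted) vote for a label,
# and classifiers whose modality is missing simply abstain.
from collections import defaultdict


def fuse_decisions(predictions, weights=None):
    """Combine per-classifier label predictions by weighted vote.

    predictions: dict mapping classifier name -> predicted label,
                 where a value of None marks a missing modality.
    weights:     optional dict mapping classifier name -> vote weight
                 (unlisted classifiers default to weight 1.0).
    Returns the label with the highest total vote weight.
    """
    weights = weights or {}
    scores = defaultdict(float)
    for name, label in predictions.items():
        if label is None:  # tolerate a missing or dropped modality
            continue
        scores[label] += weights.get(name, 1.0)
    if not scores:
        raise ValueError("no classifier produced a decision")
    return max(scores, key=scores.get)


# Usage: three hypothetical per-feature classifiers vote on a binary
# Arousal label; the video channel is missing for this frame.
votes = {"mfcc_svm": "high", "rasta_plp_svm": "low", "video_hmm": None}
print(fuse_decisions(votes, weights={"mfcc_svm": 2.0}))  # "high"
```

Weighted voting is only one fusion rule; sum, product, or trained combiner rules are common alternatives in the multiple-classifier-systems literature.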