Floating search methods in feature selection. Pattern Recognition Letters.
Machine Learning.
Affective computing.
Describing the emotional states that are expressed in speech. Speech Communication - Special issue on speech and emotion.
Vocal communication of emotion: a review of research paradigms. Speech Communication - Special issue on speech and emotion.
Fully generated scripted dialogue for embodied agents. Artificial Intelligence.
A Study of Emotion Recognition and Its Applications. MDAI '07: Proceedings of the 4th International Conference on Modeling Decisions for Artificial Intelligence.
A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Emotion Recognition Based on Physiological Changes in Music Listening. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Multi-level Speech Emotion Recognition Based on HMM and ANN. CSIE '09: Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering - Volume 07.
Image and Vision Computing.
Power-Law Distributions in Empirical Data. SIAM Review.
Emotion recognition from speech signals using new harmony features. Signal Processing.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
A logic for reasoning about counterfactual emotions. Artificial Intelligence.
Formant position based weighted spectral features for emotion recognition. Speech Communication.
Speech emotional recognition using global and time sequence structure features with MMD. ACII'05: Proceedings of the First International Conference on Affective Computing and Intelligent Interaction.
Evaluation of the affective valence of speech using pitch substructure. IEEE Transactions on Audio, Speech, and Language Processing.
Loss-Scaled Large-Margin Gaussian Mixture Models for Speech Emotion Classification. IEEE Transactions on Audio, Speech, and Language Processing.
Speech emotion recognition: Features and classification models. Digital Signal Processing.
This study proposes two classes of speech emotion features extracted from the electroglottograph (EGG) and speech signals. The power-law distribution coefficients (PLDC) of voiced-segment duration, pitch-rise duration, and pitch-fall duration are computed to capture vocal-fold excitation information. The real discrete cosine transform coefficients of the normalized spectra of the EGG and speech signals are calculated to capture vocal-tract modulation information. Two experiments are carried out. The first evaluates the proposed features against traditional features using sequential forward floating search and sequential backward floating search. The second is a comparative emotion recognition experiment based on a support vector machine. The results show that the proposed features outperform commonly used features in speaker-independent and content-independent speech emotion recognition.
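The two feature classes described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the fixed `x_min` cutoff, and the choice of 12 DCT coefficients are assumptions. The exponent estimator follows the maximum-likelihood form from "Power-Law Distributions in Empirical Data" (SIAM Review), which the reference list cites.

```python
import numpy as np
from scipy.fft import dct  # real DCT-II


def power_law_exponent(durations, x_min):
    """MLE of a power-law exponent for duration data (Clauset et al. form):
    alpha = 1 + n / sum(ln(x_i / x_min)) over samples x_i >= x_min."""
    x = np.asarray([d for d in durations if d >= x_min], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / x_min))


def dct_spectrum_features(frame, n_coeffs=12):
    """Real DCT coefficients of a magnitude spectrum normalized to unit sum.

    The same routine could be applied to an EGG frame or a speech frame;
    n_coeffs=12 is an illustrative choice, not the paper's setting."""
    spectrum = np.abs(np.fft.rfft(frame))
    spectrum /= spectrum.sum()          # normalize the spectrum
    return dct(spectrum, type=2, norm='ortho')[:n_coeffs]
```

In this sketch, a PLDC-style feature would be the fitted exponent of, e.g., the voiced-segment duration distribution, and the DCT coefficients would be concatenated with it before feature selection and SVM classification.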