Our goal was to determine how much of the affective message can be recovered from simple acoustic measures of the speech signal. Using pitch and broad spectral-shape measures, a multidimensional Gaussian mixture-model discriminator classified adult-directed (neutral affect) versus infant-directed speech correctly more than 80% of the time, and classified the affective message of infant-directed speech correctly nearly 70% of the time. We confirmed previous findings that changes in pitch provide an important cue to the affective message. In addition, we found that timbre, as captured by cepstral coefficients, also carries important information about the affective message. Mothers' speech was significantly easier to classify than fathers', suggesting either clearer distinctions among these messages in mothers' speech to infants, or a difference between mothers and fathers in the acoustic information used to convey them. Our research is a step toward machines that sense the "emotional state" of a speaker.
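The classification scheme described above lends itself to a compact sketch. The code below is a minimal illustration, not the authors' implementation: it assumes one Gaussian mixture model per affect class, fit to frame-level pitch and MFCC features, with librosa and scikit-learn standing in for the original signal-processing pipeline; the feature counts, mixture sizes, and file names are all illustrative choices.

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_features(wav_path):
    """Frame-level pitch plus broad spectral-shape (cepstral) features."""
    y, sr = librosa.load(wav_path, sr=16000)
    # Pitch track via the pYIN estimator; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=600, sr=sr)
    # A handful of MFCCs as a coarse description of spectral shape (timbre).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=8)
    n = min(len(f0), mfcc.shape[1])    # align the two frame sequences
    voiced = ~np.isnan(f0[:n])         # keep frames where pitch is defined
    return np.column_stack([f0[:n][voiced], mfcc[:, :n].T[voiced]])

def train_gmms(files_by_label, n_components=4):
    """Fit one Gaussian mixture model per affect class on pooled frames."""
    gmms = {}
    for label, files in files_by_label.items():
        X = np.vstack([extract_features(f) for f in files])
        gmms[label] = GaussianMixture(n_components=n_components,
                                      covariance_type="diag",
                                      random_state=0).fit(X)
    return gmms

def classify(wav_path, gmms):
    """Label an utterance by the class whose GMM scores it most highly."""
    feats = extract_features(wav_path)
    return max(gmms, key=lambda label: gmms[label].score(feats))

In use, one would train on labeled utterances and score a held-out recording, e.g. gmms = train_gmms({"adult_directed": ["ad_01.wav"], "infant_directed": ["id_01.wav"]}) followed by classify("test_utterance.wav", gmms); with equal class priors, picking the GMM with the highest log-likelihood is a standard maximum-likelihood discriminator of the kind the abstract describes.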