The automatic recognition of children's speech is well known to be a challenge, and so is the influence of affect, which is believed to degrade the performance of a speech recogniser. In this contribution, we investigate the combination of both phenomena. Extensive test runs are carried out for 1k-vocabulary continuous speech recognition on spontaneous motherese, emphatic, and angry children's speech as opposed to neutral speech. The experiments address the question of how specific emotions influence word accuracy. In a first scenario, "emotional" speech recognisers are compared to a speech recogniser trained on neutral speech only. For this comparison, equal amounts of training data are used for each emotion-related state. In a second scenario, a "neutral" speech recogniser trained on large amounts of neutral speech is adapted by adding a small amount of emotionally coloured data to the training process. The results show that emphatic and angry speech is recognised best, even better than neutral speech, and that performance can be improved further by adapting the acoustic and language models. In order to show the variability of emotional speech, we visualise the distribution of the four emotion-related states in MFCC space by applying a Sammon transformation.
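The Sammon transformation used for this visualisation is a nonlinear mapping that projects high-dimensional feature vectors onto a plane while preserving pairwise distances as far as possible, by minimising the stress E = (1 / sum_{i<j} d*_ij) * sum_{i<j} (d*_ij - d_ij)^2 / d*_ij, where d*_ij are distances in the original MFCC space and d_ij are distances in the projection. The following minimal sketch, written against NumPy, illustrates the idea with plain gradient descent; the function name, learning rate, and iteration count are illustrative assumptions, not the authors' implementation.

import numpy as np

def sammon(X, n_iter=200, lr=0.3, eps=1e-9, seed=0):
    """Project n x d points X to 2-D by minimising Sammon's stress.

    Sketch only: plain gradient descent with random initialisation;
    real implementations often use a pseudo-Newton update and PCA init.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Pairwise distances in the original (e.g. MFCC) space; eps avoids 0/0.
    Dstar = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)) + eps
    c = Dstar[np.triu_indices(n, 1)].sum()   # normalising constant
    Y = rng.normal(scale=1e-2, size=(n, 2))  # random 2-D initialisation
    for _ in range(n_iter):
        D = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)) + eps
        W = (Dstar - D) / (Dstar * D)
        np.fill_diagonal(W, 0.0)
        # Gradient of the stress with respect to each 2-D point.
        grad = (-2.0 / c) * (W[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
        Y -= lr * grad
    return Y

To reproduce a plot of the kind described above, one could stack one pooled MFCC vector per utterance for each of the four emotion-related states, pass the resulting matrix to sammon, and scatter-plot the 2-D output coloured by state; how the paper pools frames into points is not specified here.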