There are multiple reasons to expect that recognising the verbal content of emotional speech will be a difficult problem, and the recognition rates reported in the literature are in fact low. Including prosodic information improves recognition rates for emotions simulated by actors, but its relevance to the freer patterns of spontaneous speech is unproven. This paper shows that the recognition rate for spontaneous emotionally coloured speech can be improved by using a language model with an increased representation of emotional utterances. The models are derived by adapting an existing corpus, the British National Corpus (BNC). An emotional lexicon is used to identify emotionally coloured words, and sentences containing these words are recombined with the BNC to form a corpus with a raised proportion of emotional material. A language model built with this technique improves the recognition rate by about 20%.
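The corpus-adaptation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the duplication strategy, and the `boost_factor` parameter are assumptions; the paper specifies only that lexicon-matched sentences are recombined with the BNC to raise the proportion of emotional material before language-model training.

```python
def build_emotion_weighted_corpus(sentences, emotion_lexicon, boost_factor=3):
    """Raise the proportion of emotionally coloured sentences in a corpus.

    Hypothetical sketch: any sentence containing at least one word from
    the emotional lexicon is duplicated (boost_factor - 1) extra times
    and recombined with the original corpus. The resulting sentence list
    could then be fed to a standard n-gram language-model toolkit.
    """
    lexicon = {w.lower() for w in emotion_lexicon}
    # Select sentences that share at least one word with the lexicon.
    emotional = [s for s in sentences
                 if lexicon & {w.lower() for w in s.split()}]
    # Recombine: full original corpus plus extra copies of emotional sentences.
    return sentences + emotional * (boost_factor - 1)


corpus = [
    "the cat sat on the mat",
    "i am so happy today",
    "this is terribly sad news",
    "the train leaves at noon",
]
lexicon = ["happy", "sad", "angry", "afraid"]

adapted = build_emotion_weighted_corpus(corpus, lexicon, boost_factor=3)
# The two emotional sentences now appear three times each,
# so 6 of the 8 sentences in the adapted corpus are emotional.
```

In practice the boost factor would be tuned so that the proportion of emotional material in the adapted corpus matches the target domain of spontaneous emotionally coloured speech.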