EMOGIB: emotional gibberish speech database for affective human-robot interaction
Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction (ACII'11), Volume Part II
This paper presents our recent and ongoing work on expressive speech synthesis and recognition as enabling technologies for affective robot-child interaction. We show that current expression recognition systems can discriminate between several archetypal emotions, but also that the old adage "there's no data like more data" remains as valid as ever in this field. We developed a new speech synthesizer capable of high-quality concatenative synthesis. The robot will use this system to synthesize expressive nonsense speech through prosody transplantation, drawing on a recorded database of expressive speech examples. With these enabling components in place, we are ready to begin experiments towards what we hope will be effective child-machine communication of affect and emotion.
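To make the idea of prosody transplantation concrete, the sketch below shows one simplified interpretation: the pitch and duration pattern of a recorded expressive utterance (the donor) is mapped, by linear interpolation, onto a sequence of nonsense-speech units to be synthesized. The `Unit` representation, the per-unit mean F0, and the linear time mapping are all illustrative assumptions, not the authors' actual implementation, which operates on real concatenative synthesis units.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    phone: str       # nonsense syllable or phone label (illustrative)
    duration: float  # seconds
    f0: float        # mean pitch in Hz (simplification of a full F0 contour)

def transplant_prosody(target, donor):
    """Copy the donor utterance's duration/pitch pattern onto the target units.

    The donor prosody is linearly stretched or compressed to cover the
    number of target units -- a stand-in for proper per-phone alignment.
    """
    n = len(target)
    out = []
    for i, u in enumerate(target):
        # Map the target position onto the donor timeline.
        j = i * (len(donor) - 1) / max(n - 1, 1)
        lo, hi = int(j), min(int(j) + 1, len(donor) - 1)
        frac = j - lo
        dur = donor[lo].duration * (1 - frac) + donor[hi].duration * frac
        f0 = donor[lo].f0 * (1 - frac) + donor[hi].f0 * frac
        # Keep the target's segmental content, take the donor's prosody.
        out.append(Unit(u.phone, dur, f0))
    return out

# Nonsense syllables to be rendered with "happy" prosody from a donor recording.
nonsense = [Unit("ba", 0.12, 120.0), Unit("du", 0.12, 120.0), Unit("gi", 0.12, 120.0)]
happy = [Unit("x", 0.10, 200.0), Unit("x", 0.20, 250.0),
         Unit("x", 0.10, 300.0), Unit("x", 0.30, 220.0)]
result = transplant_prosody(nonsense, happy)
```

In a real system the transplanted F0 would be a frame-level contour imposed on the concatenated units rather than a single mean value per unit, but the alignment-and-copy structure is the same.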