A computational model for the automatic recognition of affect in speech
There is a need for speech synthesis to be more emotionally expressive, and implicit control of a subset of affective vocal effects could be advantageous for some applications. Physiological measures associated with autonomic nervous system (ANS) activity are potential candidates for such input. This paper describes a pilot study investigating physiological sensor readings as potential input signals for modulating the speech synthesis of affective utterances composed by human users. A small corpus of audio, heart rate, and skin conductance data was collected from eight doctoral student oral defenses. Planned analysis and research phases are outlined.
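To make the proposed pipeline concrete, the sketch below shows one hypothetical way ANS readings like those collected (heart rate, skin conductance) could be mapped to prosody controls for a synthesizer. This is not the paper's method: the normalization ranges, the arousal formula, and the pitch/rate multipliers are all illustrative assumptions.

```python
# Illustrative sketch (not from the paper): mapping normalized ANS
# readings to hypothetical prosody controls for a speech synthesizer.
# All ranges and weights below are assumptions for demonstration only.

def normalize(value, low, high):
    """Clip and scale a raw sensor reading to [0, 1]."""
    return min(max((value - low) / (high - low), 0.0), 1.0)

def prosody_from_ans(heart_rate_bpm, skin_conductance_us):
    """Map heart rate (bpm) and skin conductance (microsiemens)
    to pitch-shift and speaking-rate multipliers via a single
    assumed arousal estimate."""
    arousal = 0.5 * normalize(heart_rate_bpm, 60, 120) \
            + 0.5 * normalize(skin_conductance_us, 1, 20)
    pitch_mult = 1.0 + 0.3 * (arousal - 0.5)  # up to +/-15% pitch shift
    rate_mult = 1.0 + 0.2 * (arousal - 0.5)   # up to +/-10% rate change
    return {"pitch": pitch_mult, "rate": rate_mult, "arousal": arousal}

# Example: mid-range readings yield neutral (1.0x) prosody multipliers.
print(prosody_from_ans(90, 10.5))
```

A real system would need per-user baselines and temporal smoothing of the sensor streams, but the core idea of driving continuous synthesis parameters from continuous physiological signals is what the study's corpus is designed to explore.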