Emotion-aware assistive system for humanistic care based on the orange computing concept
Applied Computational Intelligence and Soft Computing - Special issue on Awareness Science and Engineering
Speech emotional features extraction based on electroglottograph
Neural Computation
In order to study the relationship between emotion and intonation, a new technique is introduced for extracting the dominant pitches within speech utterances and for the quasi-musical analysis of their multipitch structure. After the distribution of fundamental frequencies over the entire utterance has been obtained, the underlying pitch structure is determined using an unsupervised clustering algorithm (Gaussian mixtures). The technique normally yields 3-6 pitch clusters per utterance, which can then be evaluated in terms of their inherent dissonance, harmonic "tension", and "major or minor" modality. Stronger dissonance and tension were found in utterances with negative affect than in utterances with positive affect. Most importantly, utterances evaluated as having positive or negative affect had significantly different modality values. Factor analysis showed that the measures involving multiple pitches were distinct from other acoustical measures, indicating that the pitch substructure is an independent factor contributing to the affective valence of speech prosody.
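The pipeline described in the abstract — collect fundamental-frequency (F0) samples over an utterance, then fit a Gaussian mixture to recover a small number of pitch clusters — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the F0 values here are synthetic stand-ins for real pitch-tracker output, the EM routine is a generic 1-D Gaussian-mixture fit, and the 3-6 cluster range is selected by BIC, which is one plausible model-selection choice the abstract does not specify.

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=200):
    """Fit a 1-D Gaussian mixture with k components via plain EM."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Deterministic initialisation: means at evenly spaced quantiles,
    # shared variance, uniform mixture weights.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each F0 sample.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    # Log-likelihood under the final parameters, and BIC
    # (a k-component 1-D mixture has 3k - 1 free parameters).
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
           / np.sqrt(2 * np.pi * var)
    ll = np.log(dens.sum(axis=1)).sum()
    bic = (3 * k - 1) * np.log(n) - 2 * ll
    return mu, var, w, bic

# Synthetic "utterance": F0 samples (Hz) drawn around three pitch targets,
# standing in for the output of a real pitch tracker.
rng = np.random.default_rng(1)
f0 = np.concatenate([rng.normal(120, 5, 300),
                     rng.normal(150, 5, 300),
                     rng.normal(200, 5, 300)])

# Search the 3-6 cluster range reported in the abstract; pick by lowest BIC.
fits = {k: fit_gmm_1d(f0, k) for k in range(3, 7)}
best_k = min(fits, key=lambda k: fits[k][3])
print("clusters:", best_k, "means (Hz):", np.sort(fits[best_k][0]).round(1))
```

The recovered cluster means are the "dominant pitches" on which the quasi-musical measures (interval dissonance, tension, major/minor modality) would subsequently be computed; those measures themselves depend on psychoacoustic models not detailed in the abstract, so they are omitted here.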