In this study, a system capable of both recognizing and synthesizing emotional content in speech is developed. First, relation information linking the physical features of emotional speech to the emotional content perceived by listeners is estimated by linear statistical methods and incorporated into the system, which then performs both emotion recognition and synthesis through simple linear operations on that relation information. In the system, the pitch contour is represented by the Fujisaki model (7 parameters), the power envelope is approximated by 5 line segments (11 parameters), and PSOLA is applied to synthesize the speech. Emotion words with very little mutual correlation were selected through preliminary statistical experiments. The relation information was verified to be significant, and the experimental results show that the system recognized and synthesized emotional content in speech much as human subjects did.
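The "simple linear operations" described above can be illustrated with a minimal sketch: an affine map estimated by ordinary least squares between the 18 acoustic parameters named in the abstract (7 Fujisaki pitch parameters plus 11 power-envelope parameters) and listener-rated emotion scales, applied forward for recognition and inverted via a pseudo-inverse for synthesis. Everything below beyond those dimensions is an assumption for illustration; the function names, the bias term, and the use of NumPy's lstsq/pinv are not details from the paper.

```python
import numpy as np

# Dimensions taken from the abstract: 7 Fujisaki pitch-model parameters
# plus 11 power-envelope parameters per utterance.
N_FEATURES = 7 + 11

def fit_relation(features, ratings):
    """Least-squares estimate of a map W such that ratings ~= [features, 1] @ W.

    features: (n_utterances, N_FEATURES) acoustic parameters
    ratings:  (n_utterances, n_emotions) listener-perceived emotion scores
    """
    # Append a bias column so the map is affine rather than strictly linear
    # (an assumption; the paper only says "linear statistical methods").
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return W

def recognize(W, feature_vec):
    """Recognition: map one utterance's acoustic parameters to emotion scores."""
    x = np.append(feature_vec, 1.0)
    return x @ W

def synthesize_features(W, target_scores):
    """Synthesis direction: pseudo-invert the map to obtain acoustic
    parameters expressing the target emotion; in the described system
    these would then drive PSOLA resynthesis of the speech."""
    x = target_scores @ np.linalg.pinv(W)
    return x[:-1]  # drop the bias term
```

Under this reading, recognition and synthesis are a single matrix multiplication each once W has been estimated, which matches the abstract's claim that both tasks reduce to easy linear operations on the relation information.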