In this paper, a speech-based emotion classification method is presented. Five basic human emotions are investigated: anger, fear, happiness, sadness, and neutral. The paper explores the use of an Adaptive Neuro-Fuzzy Inference System (ANFIS) to design a classifier that can discriminate between these emotions. The results are found to be significant, both for cognitive science and for speech technology. For emotion recognition, we selected pitch statistics, the first and second formants, energy, and speaking rate as the base features. An ANFIS-based recognizer is created, and ensembles of such recognizers are used as a key component of a decision-support system for prioritizing voice messages and assigning a suitable agent to respond to each message. The recognition of emotion in human speech has gained increasing attention in recent years due to the wide variety of applications that benefit from such technology, such as emotional robots or computer systems.
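To make the ANFIS idea concrete, the following is a minimal sketch of the forward pass of a first-order Sugeno fuzzy system, the architecture ANFIS trains, over two of the features the abstract names (pitch and energy). The rule parameters, feature values, and emotion labels in the comments are hypothetical illustrations, not the paper's trained model, which operates on the full feature set and five emotion classes.

```python
import math

def gaussian_mf(x, center, sigma):
    """Gaussian membership degree of x in a fuzzy set (ANFIS layer 1)."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def anfis_forward(features, rules):
    """First-order Sugeno forward pass.

    features: dict mapping feature name -> value.
    rules: list of (mf_params, consequent) pairs, where mf_params maps
    each feature to a (center, sigma) Gaussian, and consequent maps each
    feature to a linear weight plus a 'bias' term.
    """
    firing, outputs = [], []
    for mf_params, consequent in rules:
        # Layers 1-2: membership degrees combined by product (fuzzy AND).
        w = 1.0
        for name, (c, s) in mf_params.items():
            w *= gaussian_mf(features[name], c, s)
        firing.append(w)
        # Layer 4 input: first-order Sugeno consequent, linear in the features.
        y = consequent["bias"] + sum(
            consequent[name] * features[name] for name in mf_params
        )
        outputs.append(y)
    # Layers 3-5: normalized weighted average of the rule outputs.
    total = sum(firing)
    return sum(w * y for w, y in zip(firing, outputs)) / total

# Two toy rules over mean pitch (Hz) and frame energy (arbitrary units);
# all numbers are illustrative, not learned from speech data.
rules = [
    # High pitch and high energy push the score up (e.g. toward "anger").
    ({"pitch": (220.0, 40.0), "energy": (0.8, 0.3)},
     {"pitch": 0.004, "energy": 0.5, "bias": 0.0}),
    # Low pitch and low energy pull the score down (e.g. toward "sadness").
    ({"pitch": (140.0, 40.0), "energy": (0.2, 0.3)},
     {"pitch": -0.002, "energy": -0.4, "bias": 0.6}),
]

score = anfis_forward({"pitch": 230.0, "energy": 0.9}, rules)
```

In training, ANFIS would tune the membership centers, widths, and consequent weights from labeled utterances; an ensemble, as described above, would combine several such recognizers (e.g. one per emotion pair) and vote on the final label.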