The implementation of the emotion recognition from speech and facial expression system

  • Authors:
  • Chang-Hyun Park;Kwang-Sub Byun;Kwee-Bo Sim

  • Affiliations:
  • School of Electrical and Electronic Engineering, Chung-Ang University, Seoul, Korea

  • Venue:
  • ICNC'05 Proceedings of the First International Conference on Advances in Natural Computation - Volume Part II
  • Year:
  • 2005


Abstract

In this paper, we introduce a system that recognizes emotion from speech and displays the corresponding facial expression using a 2-dimensional emotion space. Four emotional states are classified with an ANN. The features derived from the signal, pitch and loudness, contribute quantitatively to the classification of emotions. First, we analyze the acoustic elements for use as emotional features, and these elements are evaluated by an ANN classifier. Second, we implement an avatar (a simply drawn face) whose facial expressions change naturally via the dynamic emotion space model.
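As a rough illustration of how a 2-dimensional emotion space can drive a small set of discrete expressions, the sketch below maps a point in an assumed (valence, arousal) plane to one of four emotion labels by quadrant. The axis names, the four labels, and the quadrant rule are illustrative assumptions, not the paper's method; the paper itself classifies the four states with an ANN over pitch and loudness features.

```python
# Hedged sketch: quadrant lookup in an assumed 2-D (valence, arousal)
# emotion space. Labels and axes are illustrative, not from the paper.

def classify_emotion(valence: float, arousal: float) -> str:
    """Map a point in the 2-D emotion space to one of four states."""
    if arousal >= 0:
        # High arousal: positive valence -> joy, negative -> anger
        return "joy" if valence >= 0 else "anger"
    # Low arousal: positive valence -> calm, negative -> sadness
    return "calm" if valence >= 0 else "sadness"
```

In the paper's dynamic emotion space model, the avatar's expression would instead track a continuously moving point in this space, so transitions between expressions appear natural rather than switching abruptly between the four labels.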