Employing Fujisaki's intonation model parameters for emotion recognition

  • Authors:
  • Panagiotis Zervas;Iosif Mporas;Nikos Fakotakis;George Kokkinakis

  • Affiliations:
  • Wire Communication Laboratory, Electrical and Computer Engineering Dept., University of Patras, Rion, Patras, Greece (all authors)

  • Venue:
  • SETN'06 Proceedings of the 4th Hellenic Conference on Advances in Artificial Intelligence
  • Year:
  • 2006


Abstract

In this paper we introduce the use of features extracted from Fujisaki's parameterization of the pitch contour for the task of emotion recognition from speech. To evaluate the proposed features, we trained a decision tree inducer as well as an instance-based learning algorithm. The datasets used to train the classification models were extracted from two emotional speech databases. Fujisaki's parameters benefited all prediction models, with an average increase of 9.52% in total accuracy.
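
For context, the Fujisaki (command-response) model represents the log-F0 contour as a baseline value plus the responses of two critically damped second-order filters driven by phrase commands (impulses) and accent commands (pedestals); the amplitudes and timings of these commands are the kind of parameters the paper draws features from. The sketch below is a minimal illustration of that model only, not the authors' implementation; the time constants alpha and beta, the ceiling gamma, the baseline Fb, and the command placements are illustrative assumptions.

```python
import numpy as np

def phrase_response(t, alpha=2.0):
    # Gp(t): response of the phrase-control filter to an impulse (0 for t < 0)
    return np.where(t >= 0, alpha**2 * t * np.exp(-alpha * t), 0.0)

def accent_response(t, beta=20.0, gamma=0.9):
    # Ga(t): response of the accent-control filter to a step, clipped at gamma (0 for t < 0)
    return np.where(t >= 0,
                    np.minimum(1.0 - (1.0 + beta * t) * np.exp(-beta * t), gamma),
                    0.0)

def fujisaki_f0(t, fb, phrase_cmds, accent_cmds, alpha=2.0, beta=20.0, gamma=0.9):
    """ln F0(t) = ln Fb + sum_i Ap_i * Gp(t - T0_i)
                        + sum_j Aa_j * [Ga(t - T1_j) - Ga(t - T2_j)]"""
    log_f0 = np.full_like(t, np.log(fb))
    for ap, t0 in phrase_cmds:            # (amplitude, onset time)
        log_f0 += ap * phrase_response(t - t0, alpha)
    for aa, t1, t2 in accent_cmds:        # (amplitude, onset time, offset time)
        log_f0 += aa * (accent_response(t - t1, beta, gamma)
                        - accent_response(t - t2, beta, gamma))
    return np.exp(log_f0)

# Illustrative example: one phrase command and one accent command over a 2 s utterance
t = np.linspace(0.0, 2.0, 200)
f0 = fujisaki_f0(t, fb=110.0,
                 phrase_cmds=[(0.5, 0.0)],
                 accent_cmds=[(0.4, 0.3, 0.8)])
```

In a setup like the paper describes, the command amplitudes and timings estimated from real pitch contours (rather than synthesized as above) would serve as input features to the decision tree and instance-based classifiers.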