Exploiting a Vowel Based Approach for Acted Emotion Recognition

  • Authors:
  • Fabien Ringeval; Mohamed Chetouani

  • Affiliations:
  • Université Pierre et Marie Curie (Paris 6), Institut des Systèmes Intelligents et de Robotique, 94200 Ivry-sur-Seine, France

  • Venue:
  • Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction
  • Year:
  • 2008


Abstract

This paper describes and studies a new feature extraction approach for emotion recognition. Our contribution is based on the extraction and characterization of phonemic units such as vowels and consonants, which are provided by a pseudo-phonetic speech segmentation phase combined with a vowel detector. The segmentation algorithm is evaluated on both emotional (Berlin) and non-emotional (TIMIT, NTIMIT) databases. For the emotion recognition task, we extract MFCC acoustic features from these pseudo-phonetic segments (vowels, consonants) and compare this approach with traditional voiced and unvoiced segments. Classification is performed on the Berlin corpus with the well-known k-NN (k-nearest neighbors) classifier.
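
To make the pipeline concrete, the sketch below illustrates the general idea of extracting MFCC features per segment and classifying them with k-NN. It is not the authors' implementation: the use of librosa and scikit-learn, the segment-averaging of MFCCs, and the choice of k are assumptions made only for illustration.

```python
# Minimal sketch (not the authors' method): per-segment MFCC features + k-NN.
# Assumes segments are given as (start, end) times in seconds, e.g. from a
# pseudo-phonetic segmentation / vowel-detection step not shown here.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def segment_features(wav_path, segments, n_mfcc=13):
    """Return one averaged MFCC vector per (start, end) segment."""
    y, sr = librosa.load(wav_path, sr=None)
    feats = []
    for start, end in segments:
        chunk = y[int(start * sr):int(end * sr)]
        mfcc = librosa.feature.mfcc(y=chunk, sr=sr, n_mfcc=n_mfcc)
        feats.append(mfcc.mean(axis=1))  # average frames over the segment
    return np.vstack(feats)

# Hypothetical data: one feature vector per vowel/consonant segment, each
# labelled with the emotion class of its utterance (e.g. from Berlin).
# X_train, y_train, X_test = ...
knn = KNeighborsClassifier(n_neighbors=5)  # k is an arbitrary example value
# knn.fit(X_train, y_train)
# predictions = knn.predict(X_test)
```

In this sketch, segment-level decisions could then be pooled (e.g. by majority vote) into an utterance-level emotion label; how the paper combines segment evidence is not specified in the abstract.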