Facial emotion classification using active appearance model and support vector machine classifier

  • Authors:
  • Marian Beszédeš; Phil Culverhouse; Miloš Oravec

  • Affiliations:
  • Dept. of Telecommunications, FEI STU Bratislava, Bratislava, Slovakia; Centre for Interactive Intelligent Systems SoCCE, University of Plymouth, Plymouth, United Kingdom; Dept. of Telecommunications, FEI STU Bratislava, Bratislava, Slovakia

  • Venue:
  • Machine Graphics & Vision International Journal
  • Year:
  • 2009


Abstract

Automatic analysis of human facial expression is an interesting and non-trivial problem. In the last decade, many approaches to emotion recognition based on the analysis of facial expression have been described. However, little has been done in the sub-area of recognizing facial emotion intensity levels. This paper analyzes the use of Active Appearance Models (AAMs) and Support Vector Machine (SVM) classifiers for the recognition of human facial emotion and emotion intensity levels. AAMs are known as a tool for statistical modeling of object shape/appearance and for precise object feature detection. In our case, we examine their properties as a technique for feature extraction. We analyze the influence of various facial feature data types (shape/texture/combined AAM parameter vectors) and of facial image size on the final classification accuracy. We then describe approaches to the proper adjustment of the training parameters of C-SVM classifiers (RBF kernel). Moreover, we discuss an alternative way of evaluating classification accuracy that uses the human visual system as a reference point. Unlike the usual approach to the evaluation of recognition algorithms (based on comparing final classification accuracies), the proposed evaluation scheme is independent of the testing set parameters, such as the number, age and gender of subjects or the intensity of their emotions. Finally, we show that our automatic system assigns emotion category labels to images more consistently than human subjects do, while humans are more consistent than our system in identifying emotion intensity levels.
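The pipeline the abstract describes, AAM parameter vectors fed to a C-SVM with an RBF kernel whose training parameters (penalty C and kernel width gamma) are tuned, can be sketched as below. This is a minimal illustration, not the authors' implementation: the synthetic feature vectors stand in for real AAM shape/texture parameters, and the grid values and scikit-learn usage are assumptions for demonstration.

```python
# Sketch: tuning C-SVM (RBF kernel) training parameters by cross-validated
# grid search, assuming AAM parameter vectors are already extracted as
# fixed-length feature vectors. Synthetic data replaces the real features.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_features = 40, 30   # e.g. 30 AAM parameters per face image

# Two illustrative emotion classes with shifted means.
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_features)),
               rng.normal(1.5, 1.0, (n_per_class, n_features))])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Grid over the two C-SVM/RBF hyperparameters: penalty C and kernel width gamma.
param_grid = {"svc__C": [0.1, 1, 10, 100],
              "svc__gamma": [1e-3, 1e-2, 1e-1, 1.0]}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

In practice the same search would be run once per feature type (shape, texture, or combined AAM parameters) and image size, so that each configuration gets its own well-tuned (C, gamma) pair before accuracies are compared.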