Static topographic modeling for facial expression recognition and analysis

  • Authors:
  • Jun Wang; Lijun Yin

  • Affiliations:
  • Department of Computer Science, State University of New York at Binghamton, Binghamton, NY 13902, USA

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2007

Abstract

Facial expression plays a key role in non-verbal face-to-face communication. Developing an automatic facial expression reading and understanding system is challenging, especially when recognizing a facial expression from a static image without any prior knowledge of the test subject. In this paper, we present a topographic modeling approach to recognizing and analyzing facial expressions from single static images. The topographic modeling is built on a novel facial expression descriptor, the Topographic Context (TC), for representing and recognizing facial expressions. The proposed approach applies topographic analysis, which treats the image as a 3D surface and labels each pixel by its terrain features. The topographic context captures the distribution of terrain labels in the expressive regions of a face; it characterizes a distinct facial expression while preserving abundant expression information and discarding most individual characteristics. Experiments on person-dependent and person-independent facial expression recognition using two public databases (the MMI and Cohn-Kanade databases) show that TC is a good feature representation for recognizing the basic prototypic expressions. Furthermore, we conduct a separability analysis of the TC-based features, both through a visualized dimensionality-reduction example and through a theoretical estimation using a separability criterion. For an in-depth understanding of the recognition properties of different expressions, the between-expression discriminability is also quantitatively evaluated using the same criterion. Finally, we investigate the robustness of the extracted TC-based expression features in two respects: robustness to distortion of the detected face region and robustness to different intensities of facial expression. The experimental results show that our system achieves a best correct rate of 82.61% for person-independent facial expression recognition.
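To make the "image as a 3D surface" idea concrete, the sketch below labels each pixel by a terrain type derived from the image gradient and Hessian eigenvalues, then forms a Topographic Context as the normalized distribution of labels over a region. The abstract does not specify the paper's actual label set, classification rules, or thresholds, so everything here (the seven labels, `grad_eps`, `curv_eps`, and the tie-breaking order) is an illustrative assumption, not the authors' method:

```python
import numpy as np

# Illustrative terrain label set; the paper's actual labels and
# thresholds are not given in the abstract.
FLAT, PEAK, PIT, SADDLE, RIDGE, RAVINE, HILLSIDE = range(7)

def terrain_labels(img, grad_eps=0.05, curv_eps=1e-3):
    """Label each pixel of a 2D intensity image by a terrain type,
    treating the image as a surface z = I(x, y)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                 # first derivatives
    gmag = np.hypot(gx, gy)                   # gradient magnitude
    gxy, gxx = np.gradient(gx)                # second derivatives
    gyy, gyx = np.gradient(gy)
    # Eigenvalues of the symmetric 2x2 Hessian, in closed form.
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc  # l1 >= l2

    labels = np.full(img.shape, HILLSIDE, dtype=int)
    stationary = gmag < grad_eps
    # Sloped pixels: strong curvature across the slope marks a
    # ridge (convex) or ravine (concave); ravine wins ties here.
    labels[~stationary & (l2 < -curv_eps)] = RIDGE
    labels[~stationary & (l1 > curv_eps)] = RAVINE
    # Near-stationary pixels: classify by the sign pattern of the
    # Hessian eigenvalues, defaulting to flat terrain.
    labels[stationary] = FLAT
    labels[stationary & (l1 < -curv_eps) & (l2 < -curv_eps)] = PEAK
    labels[stationary & (l1 > curv_eps) & (l2 > curv_eps)] = PIT
    labels[stationary & (l1 > curv_eps) & (l2 < -curv_eps)] = SADDLE
    return labels

def topographic_context(labels, n_types=7):
    """Topographic Context of a region: the normalized histogram of
    terrain labels, i.e. their distribution over the region."""
    counts = np.bincount(labels.ravel(), minlength=n_types)
    return counts / counts.sum()
```

For a full expressive region of a face, one such histogram per region (eyes, brows, mouth, etc.) would be concatenated into the TC feature vector; on a synthetic Gaussian bump, the center pixel is labeled a peak and the surrounding background flat.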