Efficient detection of consecutive facial expression apices using biologically based log-normal filters

  • Authors:
  • Zakia Hammal

  • Affiliations:
  • The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • ISVC'11: Proceedings of the 7th International Conference on Advances in Visual Computing - Volume Part I
  • Year:
  • 2011

Abstract

The automatic extraction of the most relevant information from a video sequence composed of continuous affective states is an important challenge for efficient human-machine interaction systems. In this paper a method is proposed to solve this problem in two steps: first, the automatic segmentation of consecutive emotional segments based on the responses of a set of Log-Normal filters; second, the automatic detection of the facial expression apices based on the estimation of the global face energy inside each emotional segment, independently of the displayed facial expression. The proposed method is fully automatic and does not rely on any reference image, such as a neutral face at the beginning of the sequence. It is a first contribution toward summarizing the most important affective information present in a video sequence independently of the displayed facial expressions. The robustness and efficiency of the proposed method with respect to different acquisition conditions and facial differences have been evaluated on a large data set (157 video sequences) taken from two benchmark databases (the Hammal-Caplier and MMI databases) [1, 2] and from 20 recorded video sequences containing multiple facial expressions (three to seven facial expressions per sequence), included to provide more challenging image data in which expressions are not neatly packaged as neutral-expression-neutral.
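
The abstract outlines a two-step pipeline: per-frame Log-Normal filtering to obtain a global face energy signal, followed by an energy-maximum search within each emotional segment to locate the apex. The sketch below illustrates that idea; it is not the authors' implementation, the filter-bank parameters (numbers of scales and orientations, bandwidths, central frequencies) and the energy measure are illustrative assumptions, and the segmentation into emotional segments is taken as given.

```python
import numpy as np

def log_normal_bank(shape, n_orient=4, n_scales=3,
                    f_min=0.05, sigma_f=0.55, sigma_theta=np.pi / 8):
    """Build a bank of Log-Normal filters in the Fourier domain.

    All parameter values here are assumed for illustration only.
    """
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1e-9                      # avoid log(0) at the DC component
    theta = np.arctan2(fy, fx)

    bank = []
    for s in range(n_scales):
        f0 = f_min * (2 ** s)           # octave-spaced central frequencies
        radial = np.exp(-(np.log(f / f0) ** 2) / (2 * sigma_f ** 2))
        for o in range(n_orient):
            theta0 = o * np.pi / n_orient
            d_theta = np.angle(np.exp(1j * (theta - theta0)))  # wrap to [-pi, pi]
            angular = np.exp(-(d_theta ** 2) / (2 * sigma_theta ** 2))
            bank.append(radial * angular)
    return bank

def global_face_energy(frame, bank):
    """Sum of squared Log-Normal filter responses for one grayscale face frame."""
    spectrum = np.fft.fft2(frame.astype(float))
    energy = 0.0
    for g in bank:
        response = np.fft.ifft2(spectrum * g)
        energy += np.sum(np.abs(response) ** 2)
    return energy

def detect_apices(frames, segments):
    """Return the apex frame index (energy maximum) of each emotional segment.

    frames   -- iterable of 2-D grayscale face images of identical size
    segments -- list of (start, end) frame-index pairs, end exclusive,
                assumed to come from the preceding segmentation step
    """
    frames = [np.asarray(f) for f in frames]
    bank = log_normal_bank(frames[0].shape)
    energy = np.array([global_face_energy(f, bank) for f in frames])
    return [start + int(np.argmax(energy[start:end])) for start, end in segments]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a face video: 30 frames of 64x64 intensities.
    video = rng.random((30, 64, 64))
    print(detect_apices(video, segments=[(0, 15), (15, 30)]))
```

Taking the per-segment energy maximum as the apex reflects the intuition that facial deformation, and hence the filter-bank response, peaks at the expression apex; the actual energy normalization and segmentation criteria used in the paper may differ.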