A psychologically-inspired match-score fusion model for video-based facial expression recognition

  • Authors:
  • Albert Cruz, Bir Bhanu, Songfan Yang

  • Affiliations:
  • Center for Research in Intelligent Systems, University of California, Riverside, California (all authors)

  • Venue:
  • ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
  • Year:
  • 2011

Abstract

Communication between humans is complex and not limited to verbal signals; emotions are also conveyed through gesture, pose, and facial expression. Facial Emotion Recognition and Analysis (FERA), the set of techniques by which non-verbal communication is quantified, is an exemplar case where humans consistently outperform computer methods. While the field of FERA has seen many advances, no system has been proposed that scales well to very large datasets. The challenge for computer vision is to automatically and non-heuristically downsample the data while retaining maximal representational power, without sacrificing accuracy. In this paper, we propose a method inspired by human vision and attention theory [2]. Video is segmented into temporal partitions with a dynamic sampling rate based on the frequency of visual information, and regions are homogenized by a match-score fusion technique. The approach is shown to provide classification rates higher than the baseline on the AVEC 2011 video subchallenge dataset [15].
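The two ideas the abstract names can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: inter-frame difference stands in for the paper's "frequency of visual information" criterion, and a simple sum rule stands in for the match-score fusion step; all function names and parameters below are hypothetical.

```python
import numpy as np

def dynamic_sample_indices(frames, n_partitions=4, budget=12):
    """Allocate a frame budget across temporal partitions in proportion
    to inter-frame visual change (a stand-in for the paper's dynamic
    sampling-rate criterion)."""
    frames = np.asarray(frames, dtype=float)
    T = len(frames)
    # Mean absolute difference between consecutive frames as a motion proxy.
    motion = np.abs(np.diff(frames, axis=0)).reshape(T - 1, -1).mean(axis=1)
    bounds = np.linspace(0, T, n_partitions + 1, dtype=int)
    # Motion energy of each partition (transitions starting inside it).
    energy = np.array([motion[a:max(b - 1, a)].sum()
                       for a, b in zip(bounds[:-1], bounds[1:])])
    total = energy.sum()
    weights = energy / total if total > 0 else np.full(n_partitions, 1.0 / n_partitions)
    # At least one frame per partition; more where motion is concentrated.
    counts = np.maximum(1, np.round(weights * budget).astype(int))
    picks = set()
    for (a, b), k in zip(zip(bounds[:-1], bounds[1:]), counts):
        # Evenly spaced samples within the partition.
        picks.update(np.linspace(a, b - 1, k).astype(int).tolist())
    return sorted(picks)

def fuse_scores(region_scores):
    """Sum-rule match-score fusion: average the per-region score vectors."""
    return np.mean(np.asarray(region_scores, dtype=float), axis=0)
```

On a clip whose second half contains most of the motion, a sampler of this kind concentrates its budget there, while static segments are covered by only one or two frames each.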