A multimodal approach for online estimation of subtle facial expression

  • Authors:
  • Xiaohong Xiang; Mohan S. Kankanhalli

  • Affiliations:
  • School of Computing, National University of Singapore, Singapore; School of Computing, National University of Singapore, Singapore

  • Venue:
  • PCM '12: Proceedings of the 13th Pacific-Rim Conference on Advances in Multimedia Information Processing
  • Year:
  • 2012


Abstract

Recognizing subtle emotional expressions of humans is a challenging and interesting problem in the field of human-computer interaction, and multimodality is a promising way to help solve it. In this paper, we first take advantage of a novel "sparse representation" approach to compute the matching degree of the current facial expression to each basic emotion class. Concurrently, we use an eye tracker to obtain the instantaneous pupillary response, which gives us clues to the subtle emotion. We combine the facial-expression results with the pupillary information and take the previous emotional state into account to classify the current subtle emotional expression. Finally, a Markov model is used to compute a directed graph that models the changes in human emotion. The experimental results show that, first, sparse representation achieves a good classification rate on facial expression and, second, the fusion of facial expression, pupil size and the previous emotional state is a promising strategy for analyzing subtle expressions.
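The fusion step described in the abstract can be sketched as a weighted combination of three per-class score vectors: the sparse-representation matching degrees from the face, a pupil-derived score, and a Markov prior given the previous emotional state. The class set, the weights, the transition matrix, and all numbers below are hypothetical illustrations, not the paper's actual values or method.

```python
import numpy as np

# Hypothetical emotion classes; the paper's basic-emotion set is not given here.
CLASSES = ["happy", "sad", "surprise", "neutral"]

def fuse(face_scores, pupil_scores, prev_state, transition,
         w_face=0.5, w_pupil=0.3, w_prior=0.2):
    """Combine per-class matching degrees from the facial expression,
    the pupillary cue, and a Markov prior from the previous state.
    All inputs are assumed to be normalized per-class score vectors;
    the weights are illustrative, not from the paper."""
    prior = transition[CLASSES.index(prev_state)]  # transition-matrix row
    combined = w_face * face_scores + w_pupil * pupil_scores + w_prior * prior
    return CLASSES[int(np.argmax(combined))]

# Made-up example inputs: sparse-representation matching degrees,
# a pupil-derived score vector, and a uniform transition matrix.
face  = np.array([0.6, 0.1, 0.2, 0.1])
pupil = np.array([0.3, 0.2, 0.4, 0.1])
T     = np.full((4, 4), 0.25)
print(fuse(face, pupil, "neutral", T))  # -> happy
```

In a full system the transition matrix would be the directed graph learned by the Markov model, so that an emotion rarely reached from the previous state is down-weighted even when the instantaneous cues are ambiguous.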