A temporal latent topic model for facial expression recognition

  • Authors:
  • Lifeng Shang; Kwok-Ping Chan

  • Affiliations:
  • The University of Hong Kong, Pokfulam, Hong Kong (both authors)

  • Venue:
  • ACCV'10: Proceedings of the 10th Asian Conference on Computer Vision, Part IV
  • Year:
  • 2010


Abstract

In this paper we extend the latent Dirichlet allocation (LDA) topic model to capture facial expression dynamics. Our topic model integrates the temporal information of image sequences by redefining the topic generation probability, without introducing new latent variables or increasing the difficulty of inference. A collapsed Gibbs sampler is derived for batch learning on a labeled training dataset, and an efficient learning method for test data is also discussed. We describe the resulting temporal latent topic model (TLTM) in detail and show how it can be applied to facial expression recognition. Experiments on the CMU facial expression database show that the proposed TLTM is very efficient for facial expression recognition.
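To make the inference machinery concrete, below is a minimal sketch of a collapsed Gibbs sampler for plain LDA, the baseline that TLTM extends. This is not the authors' TLTM sampler: the function name, hyperparameters, and the suggestion of treating each frame's quantized visual words as a "document" are illustrative assumptions. The comment inside the sampling loop marks the topic-generation term that, per the abstract, TLTM redefines to inject temporal information.

```python
import numpy as np

def collapsed_gibbs_lda(docs, K, V, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    """Collapsed Gibbs sampler for standard LDA (illustrative sketch).

    docs : list of lists of word ids (e.g. quantized visual words per frame)
    K    : number of latent topics
    V    : vocabulary size
    """
    rng = np.random.default_rng(seed)
    D = len(docs)
    n_dk = np.zeros((D, K))                 # document-topic counts
    n_kw = np.zeros((K, V))                 # topic-word counts
    n_k = np.zeros(K)                       # per-topic totals
    z = [np.zeros(len(doc), dtype=int) for doc in docs]

    # random initialization of topic assignments
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = rng.integers(K)
            z[d][i] = k
            n_dk[d, k] += 1
            n_kw[k, w] += 1
            n_k[k] += 1

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # remove the current assignment from the counts
                n_dk[d, k] -= 1
                n_kw[k, w] -= 1
                n_k[k] -= 1
                # standard LDA full conditional; TLTM would reshape the
                # (n_dk[d] + alpha) topic-generation term with temporal
                # information from neighbouring frames
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                n_dk[d, k] += 1
                n_kw[k, w] += 1
                n_k[k] += 1
    return z, n_dk, n_kw
```

Because the counts are updated in place, the sampler stays "collapsed": the topic-word and document-topic distributions are integrated out analytically, and only the discrete assignments z are sampled, which keeps per-token updates cheap.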