Letters: Learning spatiotemporal features by using independent component analysis with application to facial expression recognition

  • Authors:
  • Fei Long;Tingfan Wu;Javier R. Movellan;Marian S. Bartlett;Gwen Littlewort

  • Affiliations:
  • Software School of Xiamen University, Xiamen, 361005, China (Fei Long); Institute for Neural Computation, University of California, San Diego, CA 92093, USA (Tingfan Wu, Javier R. Movellan, Marian S. Bartlett, Gwen Littlewort)

  • Venue:
  • Neurocomputing
  • Year:
  • 2012


Abstract

Engineered features have been heavily employed in computer vision. Recently, learning features from unlabeled data to improve the performance of a given vision task has received increasing attention in both machine learning and computer vision. In this paper, we present a method that uses unlabeled video data to learn spatiotemporal features for video classification tasks. Specifically, we employ independent component analysis (ICA) to learn spatiotemporal filters from natural videos, and then construct feature representations for the input videos of a classification task based on the learned filters. We test the performance of the proposed feature learning method on facial expression recognition. Experimental results on the well-known Cohn-Kanade database show that the learned features perform better than engineered features. Comparison experiments on the recognition of low-intensity expressions show that our method outperforms spatiotemporal Gabor features.
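
The following is a minimal sketch, not the authors' code, of the general pipeline the abstract describes: ICA filters are learned from spatiotemporal patches of unlabeled natural video, and the learned filter bank is then applied to labeled clips to produce descriptors for a classifier. The patch size, the number of components, the rectify-and-pool step, and the use of scikit-learn's FastICA are illustrative assumptions.

```python
# Sketch: spatiotemporal feature learning with ICA (illustrative, not the paper's implementation).
import numpy as np
from sklearn.decomposition import FastICA

def sample_patches(video, n_patches, size=(8, 8, 4), rng=None):
    """Draw random spatiotemporal patches of shape (h, w, t) from a video array (H, W, T)."""
    rng = rng or np.random.default_rng(0)
    H, W, T = video.shape
    h, w, t = size
    patches = np.empty((n_patches, h * w * t))
    for i in range(n_patches):
        y = rng.integers(0, H - h + 1)
        x = rng.integers(0, W - w + 1)
        f = rng.integers(0, T - t + 1)
        patches[i] = video[y:y + h, x:x + w, f:f + t].ravel()
    return patches

# 1. Learn ICA filters from unlabeled natural video (random data stands in here).
natural_video = np.random.rand(128, 128, 64)        # placeholder for real footage
X = sample_patches(natural_video, n_patches=5000)
X -= X.mean(axis=0)                                 # center the patches before ICA
ica = FastICA(n_components=64, random_state=0)
ica.fit(X)
filters = ica.components_                           # each row is a spatiotemporal filter

# 2. Represent a labeled clip by filtering its patches and pooling the responses.
def video_features(video, filters, size=(8, 8, 4)):
    patches = sample_patches(video, n_patches=2000, size=size)
    responses = np.abs(patches @ filters.T)          # rectified filter responses
    return responses.mean(axis=0)                    # mean-pool into one descriptor

clip = np.random.rand(96, 96, 32)                    # placeholder for an expression clip
feat = video_features(clip, filters)
print(feat.shape)                                    # (64,) vector to feed a classifier
```

The resulting fixed-length descriptors could then be fed to any standard classifier (e.g., an SVM) for expression recognition, which is the role engineered spatiotemporal Gabor features play in the comparison described above.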