Latent Pose Estimator for Continuous Action Recognition

  • Authors:
  • Huazhong Ning; Wei Xu; Yihong Gong; Thomas Huang

  • Affiliations:
  • ECE, U. of Illinois at Urbana-Champaign, USA; NEC Laboratories America, Inc., USA; NEC Laboratories America, Inc., USA; ECE, U. of Illinois at Urbana-Champaign, USA

  • Venue:
  • ECCV '08 Proceedings of the 10th European Conference on Computer Vision: Part II
  • Year:
  • 2008

Abstract

Recently, models based on conditional random fields (CRFs) have produced promising results on labeling sequential data in several scientific fields. However, in the vision task of continuous action recognition, the observations of visual features have dimensions as high as hundreds or even thousands, which can pose severe difficulties for parameter estimation and even degrade performance. To bridge the gap between the high-dimensional observations and the random fields, we propose a novel model that replaces the observation layer of a traditional random fields model with a latent pose estimator. In the training stage, the human pose is not observed in the action data, and the latent pose estimator is learned under the supervision of the labeled action data rather than image-to-pose data. The advantage of this model is twofold. First, it learns to convert the high-dimensional observations into more compact and informative representations. Second, it enables transfer learning to fully utilize existing knowledge and data on the image-to-pose relationship. The parameters of the latent pose estimator and the random fields are jointly optimized through a gradient ascent algorithm. Our approach is tested on HumanEva [1], a publicly available dataset. The experiments show that our approach improves recognition accuracy over the standard CRF model and its variations, and that performance can be further improved significantly by using additional image-to-pose data for training. Our experiments also show that the model trained on HumanEva can generalize to different environments and human subjects.
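The abstract's core idea, jointly optimizing a latent pose estimator and a random field by gradient ascent, can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a linear pose estimator `W`, a linear-chain CRF with unary weights `U` and transition scores `T`, and forward-backward gradients derived by hand; all identifiers and the synthetic data are hypothetical stand-ins for the paper's richer estimator and visual features.

```python
import numpy as np

def logsumexp(a, axis):
    """Numerically stable log-sum-exp along the given axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True)), axis=axis)

class LatentPoseCRF:
    """Linear-chain CRF whose unary scores come from a latent pose layer.

    W projects a high-dimensional observation x_t to a compact pose vector
    p_t = W x_t; U scores each action label against p_t; T holds label
    transition scores. All three are trained jointly by gradient ascent.
    """

    def __init__(self, dim_x, dim_p, n_labels, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((dim_p, dim_x))
        self.U = 0.01 * rng.standard_normal((n_labels, dim_p))
        self.T = np.zeros((n_labels, n_labels))

    def loglik_and_grads(self, X, y):
        P = X @ self.W.T                      # latent poses, shape (n, dim_p)
        S = P @ self.U.T                      # unary scores, shape (n, K)
        n, K = S.shape
        # Forward-backward recursions in log space.
        alpha = np.zeros((n, K)); alpha[0] = S[0]
        for t in range(1, n):
            alpha[t] = S[t] + logsumexp(alpha[t-1][:, None] + self.T, axis=0)
        beta = np.zeros((n, K))
        for t in range(n - 2, -1, -1):
            beta[t] = logsumexp(self.T + (S[t+1] + beta[t+1])[None, :], axis=1)
        logZ = logsumexp(alpha[-1], axis=0)
        # Gradient w.r.t. unary scores: empirical counts minus marginals.
        dS = -np.exp(alpha + beta - logZ)
        dS[np.arange(n), y] += 1.0
        # Transition gradient: pairwise counts minus pairwise marginals.
        gT = np.zeros((K, K))
        for t in range(n - 1):
            gT -= np.exp(alpha[t][:, None] + self.T
                         + (S[t+1] + beta[t+1])[None, :] - logZ)
            gT[y[t], y[t+1]] += 1.0
        gU = dS.T @ P                         # chain rule into the unary weights
        gW = (dS @ self.U).T @ X              # ...and into the pose estimator itself
        ll = S[np.arange(n), y].sum() + self.T[y[:-1], y[1:]].sum() - logZ
        return ll, gW, gU, gT

def train(model, X, y, steps=100, lr=0.01):
    """Jointly ascend the conditional log-likelihood in W, U, and T."""
    lls = []
    for _ in range(steps):
        ll, gW, gU, gT = model.loglik_and_grads(X, y)
        lls.append(ll)
        model.W += lr * gW
        model.U += lr * gU
        model.T += lr * gT
    return lls

# Tiny synthetic demo: 3 actions, 40 frames, 20-dim "visual features".
rng = np.random.default_rng(1)
y = np.repeat([0, 1, 2, 1], 10)
X = np.eye(3)[y] @ rng.standard_normal((3, 20)) + 0.3 * rng.standard_normal((40, 20))
model = LatentPoseCRF(dim_x=20, dim_p=4, n_labels=3)
lls = train(model, X, y)
print(f"log-likelihood: {lls[0]:.2f} -> {lls[-1]:.2f}")
```

The gradient with respect to the unary and transition scores is the standard CRF form (empirical counts minus expected counts); the pose layer receives its learning signal only through the chain rule from the action labels, mirroring the paper's point that the latent pose is supervised by action data rather than by image-to-pose pairs.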