Semantic quantization of 3D human motion capture data through spatial-temporal feature extraction

  • Authors:
  • Yohan Jin; B. Prabhakaran

  • Affiliation:
  • Department of Computer Science, University of Texas at Dallas, Richardson, Texas

  • Venue:
  • MMM'08: Proceedings of the 14th International Conference on Advances in Multimedia Modeling
  • Year:
  • 2008

Abstract

3D motion capture is a form of multimedia data that is widely used in animation and in medical fields such as physical medicine and rehabilitation, where body-joint analysis is needed. These applications typically create large repositories of motion capture data and need efficient and accurate content-based retrieval techniques. 3D motion capture data takes the form of multi-dimensional time series. To reduce the dimensionality of human motion data while preserving semantically important features, we quantize the data by extracting spatial-temporal features through SVD and translate them into a 1-dimensional sequential representation through our proposed sGMMEM (semantic Gaussian Mixture Modeling with EM). With this representation, we achieve good classification accuracies for primitive human motion categories (walking 92.85%, run 91.42%, jump 94.11%) and even for subtle categories (dance 89.47%, laugh 83.33%, basketball signal 85.71%, golf putting 80.00%).
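The pipeline the abstract describes — SVD-based spatial-temporal feature extraction followed by EM-fitted Gaussian mixture quantization into a 1-D symbol sequence — can be sketched roughly as below. This is a minimal illustration, not the authors' sGMMEM: the window size, number of singular values kept, mixture size, and the synthetic `motion` array are all assumptions for demonstration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-in for a motion capture clip: 300 frames x 20 joint angles.
motion = rng.standard_normal((300, 20))

def spatial_temporal_features(clip, window=30, k=3):
    """Slide a non-overlapping window over the frames; for each window,
    keep the top-k singular values of the window matrix as a compact
    spatial-temporal descriptor (an SVD-based reduction)."""
    feats = []
    for start in range(0, clip.shape[0] - window + 1, window):
        segment = clip[start:start + window]
        singular_values = np.linalg.svd(segment, compute_uv=False)
        feats.append(singular_values[:k])
    return np.array(feats)

features = spatial_temporal_features(motion)

# Quantize the per-window feature vectors into a 1-D discrete symbol
# sequence with a Gaussian mixture fitted by EM (a generic stand-in
# for the paper's semantic GMM step).
gmm = GaussianMixture(n_components=4, random_state=0).fit(features)
symbols = gmm.predict(features)  # one symbol per window
print(symbols)
```

The resulting symbol sequence is what makes efficient sequence-based classification and retrieval possible: each motion clip collapses to a short string over a small alphabet of mixture components.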