Moments and wavelets for classification of human gestures represented by spatio-temporal templates

  • Authors:
  • Arun Sharma; Dinesh K. Kumar

  • Affiliations:
  • School of ECE, RMIT University, Melbourne, Australia; School of ECE, RMIT University, Melbourne, Australia

  • Venue:
  • AI'04: Proceedings of the 17th Australian Joint Conference on Advances in Artificial Intelligence
  • Year:
  • 2004


Abstract

This paper reports a novel technique for classifying short-duration articulated object motion in video data. The motion is represented by a spatio-temporal template (STT), a view-based approach that collapses the temporal component into a single static grey-scale image, so that no explicit sequence matching or temporal analysis is needed, and maps the motion from a very high-dimensional space to a low-dimensional one. These templates are modified to be invariant to translation and scale. A two-dimensional, 3-level dyadic wavelet transform applied to these templates yields one low-pass subimage and nine high-pass directional subimages. Histograms of the STTs and of the wavelet coefficients at different scales are compared to establish the significance of the available information for classification. To further reduce the feature space, histograms of STTs are represented by orthogonal Legendre moments, and the wavelet subbands are modelled by generalized Gaussian density (GGD) parameters: the shape factor and the standard deviation. Preliminary experiments show that the directional information in the wavelet subbands improves the histogram-based technique, and that using moments combined with GGD parameters improves classification performance while significantly reducing the complexity of comparing histograms directly.
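
The subband-modelling step described above can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes PyWavelets for the 3-level dyadic decomposition, picks a 'db4' wavelet (the paper does not name a filter), and estimates the GGD shape factor and standard deviation of each directional subband with the standard moment-matching estimator; the random "STT" at the end is only a stand-in for a real template obtained by collapsing a gesture sequence.

    # Minimal sketch (not the authors' code): decompose an STT with a 3-level
    # dyadic wavelet transform and model each of the nine directional detail
    # subbands by GGD parameters (shape factor beta, standard deviation sigma).
    import numpy as np
    import pywt
    from scipy.special import gamma
    from scipy.optimize import brentq


    def ggd_parameters(coeffs):
        """Moment-matching estimate of the GGD shape factor and std deviation."""
        x = np.asarray(coeffs, dtype=float).ravel()
        sigma = np.sqrt(np.mean(x ** 2))        # standard deviation (second moment)
        m1 = np.mean(np.abs(x))                 # first absolute moment
        ratio = m1 / sigma                      # lies in (0, sqrt(3)/2) for typical
                                                # zero-mean detail subbands

        # F(b) = Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)) is monotone in b;
        # solve F(b) = m1 / sigma for the shape factor.
        def f(b):
            return gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - ratio

        beta = brentq(f, 0.05, 10.0)
        return beta, sigma


    def stt_wavelet_features(stt, wavelet="db4", levels=3):
        """One (beta, sigma) pair per directional subband: 9 subbands -> 18 values."""
        coeffs = pywt.wavedec2(stt, wavelet, level=levels)
        features = []
        for detail_level in coeffs[1:]:         # skip the low-pass approximation
            for subband in detail_level:        # horizontal, vertical, diagonal
                features.extend(ggd_parameters(subband))
        return np.array(features)


    if __name__ == "__main__":
        stt = np.random.rand(128, 128)          # stand-in for a real grey-scale STT
        print(stt_wavelet_features(stt).shape)  # -> (18,)

Comparing two gestures then reduces to comparing two 18-dimensional GGD feature vectors (plus the Legendre moments of the STT histogram), which is far cheaper than matching the full histograms of every subband directly.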