Analysis of composite gestures with a coherent probabilistic graphical model

  • Authors:
  • J. Corso; Guangqi Ye; G. D. Hager

  • Affiliations:
  • Computational Interaction and Robotics Lab, The Johns Hopkins University, Baltimore, MD 21218, USA (all authors)

  • Venue:
  • Virtual Reality
  • Year:
  • 2005

Abstract

Traditionally, gesture-based interaction in virtual environments is built from either static, posture-based gesture primitives or temporally analyzed dynamic primitives. However, it would be ideal to incorporate both static and dynamic gestures to fully utilize the potential of gesture-based interaction. To that end, we propose a probabilistic framework that incorporates both static and dynamic gesture primitives. We call these primitives Gesture Words (GWords). Using a probabilistic graphical model (PGM), we integrate these heterogeneous GWords and a high-level language model in a coherent fashion. Composite gestures are represented as stochastic paths through the PGM. A gesture is analyzed by finding the path that maximizes the likelihood on the PGM with respect to the video sequence. To facilitate online computation, we propose a greedy algorithm for performing inference on the PGM. The parameters of the PGM can be learned via three different methods: supervised, unsupervised, and hybrid. We have implemented the PGM for a gesture set of ten GWords with six composite gestures. The experimental results show that the PGM can accurately recognize composite gestures.
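
The abstract only sketches the inference step, so the following is a minimal, illustrative Python sketch of greedy maximum-likelihood path decoding over a toy GWord graph. The vocabulary, transition matrix, and observation likelihoods below are invented placeholders and are not the paper's ten-GWord model; in the actual system the per-segment likelihoods would come from the static posture and dynamic gesture recognizers, and the transitions from the learned high-level language model.

```python
import numpy as np

# Hypothetical GWord vocabulary and bigram language model; names and
# probabilities are illustrative, not taken from the paper.
GWORDS = ["idle", "point", "grab", "drag", "release"]
TRANS = np.array([
    [0.2, 0.4, 0.4, 0.0, 0.0],   # from "idle"
    [0.1, 0.2, 0.3, 0.4, 0.0],   # from "point"
    [0.0, 0.0, 0.2, 0.5, 0.3],   # from "grab"
    [0.0, 0.0, 0.0, 0.5, 0.5],   # from "drag"
    [1.0, 0.0, 0.0, 0.0, 0.0],   # from "release"
])

def greedy_decode(obs_lik, start=0):
    """Greedy path inference over the GWord graph.

    obs_lik[t, j] is the likelihood of video segment t under GWord j,
    as produced by whatever primitive recognizers are plugged in.
    At each step we commit to the successor that maximizes
    transition probability x observation likelihood.
    """
    path = [start]
    log_score = np.log(obs_lik[0, start])
    for t in range(1, obs_lik.shape[0]):
        scores = TRANS[path[-1]] * obs_lik[t]
        nxt = int(np.argmax(scores))
        path.append(nxt)
        log_score += np.log(max(scores[nxt], 1e-12))
    return [GWORDS[i] for i in path], log_score

# Toy observation likelihoods for a four-segment sequence (rows: segments).
obs = np.array([
    [0.90, 0.05, 0.03, 0.01, 0.01],
    [0.10, 0.70, 0.15, 0.03, 0.02],
    [0.05, 0.10, 0.60, 0.20, 0.05],
    [0.02, 0.03, 0.15, 0.70, 0.10],
])
print(greedy_decode(obs))  # e.g. (['idle', 'point', 'grab', 'drag'], log-score)
```

Because the greedy step commits to the locally best successor at each segment rather than searching all paths, decoding stays cheap enough for online use, which is the trade-off the abstract's greedy inference algorithm targets.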