Accurate and efficient gesture spotting via pruning and subgesture reasoning

  • Authors:
  • Jonathan Alon; Vassilis Athitsos; Stan Sclaroff

  • Affiliations:
  • Computer Science Department, Boston University, Boston, MA (all authors)

  • Venue:
  • ICCV'05: Proceedings of the 2005 International Conference on Computer Vision in Human-Computer Interaction
  • Year:
  • 2005

Abstract

Gesture spotting is the challenging task of locating the start and end frames of the video stream that correspond to a gesture of interest, while at the same time rejecting non-gesture motion patterns. This paper proposes a new gesture spotting and recognition algorithm that is based on the continuous dynamic programming (CDP) algorithm and runs in real time. To make gesture spotting efficient, a pruning method is proposed that allows the system to evaluate a relatively small number of hypotheses compared to CDP. Pruning is implemented by a set of model-dependent classifiers that are learned from training examples. To make gesture spotting more accurate, a subgesture reasoning process is proposed that models the fact that some gesture models can falsely match parts of other, longer gestures. In our experiments, the proposed method with pruning and subgesture modeling is an order of magnitude faster and 18% more accurate than the original CDP algorithm.
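
To make the CDP-style matching described above more concrete, the sketch below shows a simplified continuous matching loop for one gesture model over an unsegmented stream, with an optional pruning hook. The local cost function, the prune_classifiers callables, and the accept_threshold are illustrative assumptions standing in for the paper's learned, model-dependent pruning classifiers and detection criterion; this is not the authors' implementation.

```python
# Minimal, illustrative sketch of CDP-style gesture spotting with pruning.
# All names and thresholds here are hypothetical placeholders.

import numpy as np


def local_cost(model_frame, input_frame):
    # Hypothetical local matching cost: Euclidean distance between
    # per-frame feature vectors (e.g., hand position / motion features).
    return float(np.linalg.norm(model_frame - input_frame))


def spot_gesture(model, stream, accept_threshold, prune_classifiers=None):
    """Slide one gesture model over an unsegmented stream, CDP-style.

    model:  (M, d) array of model frames.
    stream: (T, d) array of input frames.
    prune_classifiers: optional list of M callables; prune_classifiers[i](cost)
        returns True if the hypothesis ending at model frame i should be
        dropped (a stand-in for the paper's learned, model-dependent pruning).
    Returns a list of (end_frame, matching_cost) candidate detections.
    """
    M, T = len(model), len(stream)
    INF = float("inf")
    prev = [INF] * M            # cumulative costs at the previous input frame
    detections = []

    for t in range(T):
        cur = [INF] * M
        for i in range(M):
            c = local_cost(model[i], stream[t])
            if i == 0:
                # A match may start at any input frame: no accumulated cost.
                cur[0] = c
            else:
                best_prev = min(prev[i], prev[i - 1], cur[i - 1])
                if best_prev < INF:
                    cur[i] = c + best_prev
            # Pruning: discard unpromising hypotheses early so that far
            # fewer cells stay active than in unpruned CDP.
            if prune_classifiers is not None and cur[i] < INF:
                if prune_classifiers[i](cur[i]):
                    cur[i] = INF
        # A cheap enough path ending at the last model frame is a
        # candidate detection that ends at input frame t.
        if cur[M - 1] < accept_threshold:
            detections.append((t, cur[M - 1]))
        prev = cur

    return detections
```

In the full system, subgesture reasoning would then arbitrate among overlapping candidate detections produced by different gesture models, so that a model that matches only part of a longer gesture is not reported spuriously.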