Translation and scale-invariant gesture recognition in complex scenes

  • Authors:
  • Alexandra Stefan; Vassilis Athitsos; Jonathan Alon; Stan Sclaroff

  • Affiliations:
  • Boston University; University of Texas at Arlington; Boston University; Boston University

  • Venue:
  • Proceedings of the 1st international conference on PErvasive Technologies Related to Assistive Environments
  • Year:
  • 2008

Abstract

Gestures are a natural means of communication between humans, and also a natural modality for human-computer interaction. Automatic recognition of gestures using computer vision is an important task in many real-world applications, such as sign language recognition, control of computer games, virtual reality, intelligent homes, and assistive environments. In order for a gesture recognition system to be robust and deployable in non-laboratory settings, the system needs to be able to operate in complex scenes, with complicated backgrounds and multiple moving and skin-colored objects. In this paper, we propose an approach for improving gesture recognition performance in such complex environments. The key idea is to integrate a face detection module into the gesture recognition system, and to use the face location and size to make gesture recognition invariant to scale and translation. Our experiments demonstrate the significant advantages of the proposed method over alternative computer vision methods for gesture recognition.
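
The abstract's key idea of face-based normalization can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes OpenCV's Haar cascade face detector and uses the face width as the scale unit, both of which are placeholder choices. It only shows the general principle of expressing hand positions in a face-centric, face-scaled coordinate frame so that the resulting trajectory is invariant to where the person stands and how far they are from the camera.

```python
# A minimal sketch of face-based normalization for translation/scale invariance.
# Assumes OpenCV's Haar cascade face detector; the paper's own detector and
# normalization constants are not specified here, so treat them as placeholders.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def normalize_hand_trajectory(frame_bgr, hand_points):
    """Map hand positions (pixel coords) into a face-centric coordinate frame.

    hand_points: iterable of (x, y) hand locations in the same frame.
    Returns the points translated so the face center is the origin and
    divided by the detected face size, or None if no face is found.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face detected; caller may fall back to raw coordinates

    # Use the largest detection as the gesturing person's face.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face_center = np.array([x + w / 2.0, y + h / 2.0])
    face_scale = float(w)  # face width as the scale unit (an assumption)

    pts = np.asarray(hand_points, dtype=float)
    return (pts - face_center) / face_scale
```

In such a scheme, a gesture performed close to the camera and one performed far away produce similar normalized trajectories, which is the translation and scale invariance the paper targets.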