Human gesture recognition using 3.5-dimensional trajectory features for hands-free user interface

  • Authors:
  • Masaki Takahashi, Mahito Fujii, Masahide Naemura, Shin'ichi Satoh

  • Affiliations:
  • Masaki Takahashi, Mahito Fujii, Masahide Naemura: Japan Broadcasting Corporation (NHK) Science and Technology Research Laboratories, Setagaya-ku, Tokyo, Japan; Shin'ichi Satoh: National Institute of Informatics, Chiyoda-ku, Tokyo, Japan

  • Venue:
  • Proceedings of the First ACM International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams (ARTEMIS)
  • Year:
  • 2010

Abstract

We present a new human motion recognition technique for a hands-free user interface. Although many motion recognition technologies for video sequences have been reported, no man-machine interface capable of recognizing a sufficient variety of motions has been developed. The difficulty lies in the limited spatial information that can be acquired from video sequences captured by a conventional camera. The proposed system uses a depth image in addition to a normal grayscale image from a time-of-flight camera that measures the distance to objects, so a wide variety of motions can be recognized accurately. The main functions of this system are gesture recognition and posture measurement. The former is performed using the bag-of-words approach, with the trajectories of key points tracked around the human body serving as features. The main technical contribution of the proposed method is the use of 3.5D spatiotemporal trajectory features, which contain horizontal, vertical, time, and depth information. The latter is performed using face detection and object tracking technology. The proposed user interface is useful and natural because it does not require any contact-type devices, such as a motion sensor controller. The effectiveness of the proposed 3.5D spatiotemporal features was confirmed through a comparative experiment with conventional 3.0D spatiotemporal features. The generality of the system was demonstrated in an experiment with multiple people, and its usefulness as a pointing device was shown in a practical simulation.
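To make the abstract's pipeline concrete, the following is a minimal sketch, not the authors' implementation, of the two ideas it names: a 3.5D trajectory descriptor built from a tracked key point's (x, y, depth) samples over time (time is implicit in the frame ordering), and a bag-of-words histogram over a codebook of such descriptors. The function names, the displacement-based descriptor, and the nearest-codeword assignment are all assumptions chosen for illustration.

```python
import numpy as np

def trajectory_feature(points):
    """Sketch of a 3.5D trajectory descriptor (assumed form, not the paper's).

    points: sequence of (x, y, depth) samples of one tracked key point at
    consecutive frames. The descriptor concatenates frame-to-frame
    displacements in x, y, and depth, normalized by overall magnitude so
    it is invariant to the trajectory's scale.
    """
    points = np.asarray(points, dtype=float)
    deltas = np.diff(points, axis=0)            # (T-1, 3) displacements
    norm = np.linalg.norm(deltas)
    if norm == 0.0:                             # stationary key point
        return deltas.ravel()
    return (deltas / norm).ravel()

def bag_of_words(descriptors, codebook):
    """Normalized histogram of nearest-codeword assignments.

    codebook: (K, D) array of codewords (e.g. from k-means over training
    trajectories); returns a K-bin histogram that sums to 1.
    """
    hist = np.zeros(len(codebook))
    for d in descriptors:
        dists = np.linalg.norm(codebook - d, axis=1)
        hist[np.argmin(dists)] += 1             # vote for nearest codeword
    return hist / hist.sum()
```

A gesture classifier would then be trained on these histograms; the 3.0D baseline mentioned in the abstract corresponds to dropping the depth column before computing displacements.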