Gesture keyboard with machine learning, requiring only one camera

  • Authors:
  • Taichi Murase, Atsunori Moteki, Genta Suzuki, Takahiro Nakai, Nobuyuki Hara, Takahiro Matsuda

  • Affiliations:
  • Fujitsu Laboratories Ltd., Nakahara-ku, Kawasaki, Kanagawa, Japan (all authors)

  • Venue:
  • AH '12 Proceedings of the 3rd Augmented Human International Conference
  • Year:
  • 2012


Abstract

In this paper, we propose a novel gesture-based virtual keyboard (Gesture Keyboard) that uses a standard QWERTY layout, requires only one camera, and employs a machine learning technique. Gesture Keyboard tracks the user's fingers and recognizes finger motions to determine key input in the horizontal direction. Real AdaBoost (Adaptive Boosting), a machine learning technique, uses HOG (Histograms of Oriented Gradients) features extracted from an image of the user's hands to estimate keys in the depth direction. Each virtual key follows its corresponding finger, so the user can input characters at a preferred hand position even if the hands shift during typing. Additionally, because Gesture Keyboard requires only one camera, keyboard-less devices can implement the system easily. We show the effectiveness of using a machine learning technique to estimate depth.
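The two core ingredients named in the abstract, HOG features and a Real AdaBoost weak learner, can be illustrated with a minimal pure-Python sketch. This is not the authors' implementation: the cell size, bin counts, and the single-feature decision stump below are simplifying assumptions chosen to keep the example self-contained.

```python
import math

def hog_cell(img, bins=9):
    """Unsigned-gradient orientation histogram for one cell (simplified HOG).
    img is a 2D list of grayscale intensities; returns an L1-normalised histogram."""
    hist = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]        # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]        # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[min(int(ang * bins / 180.0), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]

def real_adaboost_stump(values, labels, weights, n_bins=8, eps=1e-9):
    """One Real AdaBoost weak learner over a single scalar feature.
    The feature range is split into bins; each bin outputs the real-valued
    confidence h = 0.5 * ln(W+ / W-) from the weighted class masses."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    w_pos = [eps] * n_bins
    w_neg = [eps] * n_bins
    for v, y, w in zip(values, labels, weights):
        b = min(int((v - lo) / width), n_bins - 1)
        if y > 0:
            w_pos[b] += w
        else:
            w_neg[b] += w
    conf = [0.5 * math.log(w_pos[b] / w_neg[b]) for b in range(n_bins)]

    def h(v):
        b = min(max(int((v - lo) / width), 0), n_bins - 1)
        return conf[b]
    return h
```

In a full pipeline, several such weak learners would be trained in rounds over many HOG dimensions, reweighting the training samples after each round; the sign of the summed confidences would then decide whether a fingertip is in the near or far key row.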