Making gestural input from arm-worn inertial sensors more practical

  • Authors:
  • Louis Kratz; Daniel Morris; T. Scott Saponas

  • Affiliations:
  • Drexel University, Philadelphia, PA, USA & Microsoft Research, Redmond, WA, USA; Microsoft Research, Redmond, WA, USA; Microsoft Research, Redmond, WA, USA

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2012

Abstract

Gestural input can greatly improve computing experiences away from the desktop, and has the potential to provide always-available access to computing. Specifically, accelerometers and gyroscopes worn on the arm (e.g., in a wristwatch) can sense arm gestures, enabling natural input in untethered scenarios. Two core components of any gesture recognition system are detecting when a gesture is occurring and classifying which gesture a person has performed. In previous work, accurate detection has required significant computation, and high-accuracy classification has come at the cost of training the system on a per-user basis. In this note, we present a gesture detection method whose computational complexity does not depend on the duration of the gesture, and describe a novel method for recognizing gestures with only a single example from a new user.
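The abstract does not describe the detection algorithm itself; as a rough, hypothetical illustration of what "computational complexity that does not depend on the duration of the gesture" can mean in practice, the sketch below keeps a streaming, exponentially weighted motion-energy estimate and does constant work per accelerometer sample. This is a generic example, not the authors' method; the class name, smoothing factor, and thresholds are all assumptions.

```python
import numpy as np


class StreamingGestureDetector:
    """Generic per-sample detector: the energy estimate is updated in O(1),
    so detection cost does not grow with how long a gesture lasts."""

    def __init__(self, alpha=0.05, on_threshold=0.25, off_threshold=0.10):
        self.alpha = alpha                  # EWMA smoothing factor (assumed value)
        self.on_threshold = on_threshold    # energy (in g) that starts a gesture
        self.off_threshold = off_threshold  # lower energy that ends it (hysteresis)
        self.energy = 0.0
        self.active = False

    def update(self, accel_xyz):
        """Feed one (x, y, z) accelerometer sample in g; returns True while a
        gesture is believed to be in progress."""
        # Deviation of the magnitude from 1 g approximates motion intensity
        # for a wrist-worn sensor at rest under gravity.
        motion = abs(float(np.linalg.norm(accel_xyz)) - 1.0)
        self.energy = (1 - self.alpha) * self.energy + self.alpha * motion
        if self.active:
            self.active = self.energy > self.off_threshold
        else:
            self.active = self.energy > self.on_threshold
        return self.active


# Usage: stream a simulated burst of wrist motion through the detector.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rest = np.tile([0.0, 0.0, 1.0], (100, 1)) + rng.normal(0, 0.02, (100, 3))
    wave = np.tile([0.0, 0.0, 1.0], (50, 1)) + rng.normal(0, 0.6, (50, 3))
    detector = StreamingGestureDetector()
    flags = [detector.update(s) for s in np.vstack([rest, wave, rest])]
    print("samples flagged as gesture-active:", sum(flags))
```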