Breaking the status quo: Improving 3D gesture recognition with spatially convenient input devices

  • Authors:
  • Michael Hoffman; Paul Varcholik; Joseph J. LaViola

  • Affiliations:
  • University of Central Florida, Orlando, FL, USA (all authors)

  • Venue:
  • VR '10: Proceedings of the 2010 IEEE Virtual Reality Conference
  • Year:
  • 2010

Abstract

We present a systematic study on the recognition of 3D gestures using spatially convenient input devices. Specifically, we examine the linear acceleration-sensing Nintendo Wii Remote coupled with the angular velocity-sensing Nintendo Wii MotionPlus. For the study, we created a 3D gesture database, collecting data on 25 distinct gestures totaling 8,500 gesture samples. Our experiment explores how the number of gestures and the number of training samples per gesture used to train two commonly used machine learning algorithms, a linear classifier and AdaBoost, affect overall recognition accuracy. We examined these gesture recognition algorithms with user-dependent and user-independent training approaches and explored the effect of using the Wii Remote with and without the Wii MotionPlus attachment. Our results show that in the user-dependent case, both the AdaBoost and linear classification algorithms can recognize up to 25 gestures at over 90% accuracy with 15 training samples per gesture, and up to 20 gestures at over 90% accuracy with only five training samples per gesture. In particular, all 25 gestures could be recognized at over 99% accuracy with the linear classifier using 15 training samples per gesture when the Wii Remote was coupled with the Wii MotionPlus. In addition, both algorithms can recognize up to nine gestures at over 90% accuracy using a user-independent training database with 100 samples per gesture. The Wii MotionPlus attachment played a significant role in improving accuracy in both the user-dependent and user-independent cases.
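
As a rough illustration of the evaluation protocol described in the abstract, the sketch below trains a linear classifier and AdaBoost on fixed-length feature vectors derived from each gesture sample and measures accuracy for a given number of training samples per gesture. The feature extraction and the specific scikit-learn classifiers (LinearDiscriminantAnalysis as the linear classifier stand-in, AdaBoostClassifier) are assumptions for illustration only, not the implementations used in the paper.

    # Hypothetical sketch: user-dependent evaluation varying the number of
    # training samples per gesture. Classifier and feature choices are
    # stand-ins, not the paper's implementations.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.metrics import accuracy_score

    def extract_features(sample):
        """Collapse one gesture sample (a T x 6 array of linear acceleration
        and angular velocity readings) into a fixed-length feature vector.
        The actual feature set used in the paper may differ."""
        return np.concatenate([sample.mean(axis=0), sample.std(axis=0),
                               sample.min(axis=0), sample.max(axis=0)])

    def evaluate(gesture_samples, train_per_gesture=15, seed=0):
        """gesture_samples: dict mapping gesture label -> list of (T x 6) arrays.
        Trains on `train_per_gesture` samples per gesture, tests on the rest."""
        rng = np.random.default_rng(seed)
        X_train, y_train, X_test, y_test = [], [], [], []
        for label, samples in gesture_samples.items():
            idx = rng.permutation(len(samples))
            for i in idx[:train_per_gesture]:
                X_train.append(extract_features(samples[i]))
                y_train.append(label)
            for i in idx[train_per_gesture:]:
                X_test.append(extract_features(samples[i]))
                y_test.append(label)

        results = {}
        for name, clf in [("linear", LinearDiscriminantAnalysis()),
                          ("adaboost", AdaBoostClassifier(n_estimators=100))]:
            clf.fit(X_train, y_train)
            results[name] = accuracy_score(y_test, clf.predict(X_test))
        return results

A user-independent variant of this protocol would instead split the data by participant, training on samples from some users and testing on samples from held-out users.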