Towards the keyboard of Oz: learning individual soft-keyboard models from raw optical sensor data

  • Authors:
  • Jörg Edelmann; Philipp Mock; Andreas Schilling; Peter Gerjets; Wolfgang Rosenstiel; Wolfgang Straßer

  • Affiliations:
  • Knowledge Media Research Center, Tübingen, Germany; University of Tübingen, Tübingen, Germany; University of Tübingen, Tübingen, Germany; Knowledge Media Research Center, Tübingen, Germany; University of Tübingen, Tübingen, Germany; University of Tübingen, Tübingen, Germany

  • Venue:
  • Proceedings of the 2012 ACM international conference on Interactive tabletops and surfaces
  • Year:
  • 2012


Abstract

Typing on a touchscreen display usually lacks haptic feedback, which is crucial for maintaining finger-to-key assignment, especially for touch typists who do not look at their keyboard. As a result, typing on these devices is substantially more error-prone. We present a soft-keyboard model developed from typing data collected from users with diverging typing behavior. For data acquisition, we used a simulated perfect classifier that we refer to as The Keyboard of Oz. To approximate this classifier, we used the complete sensor data of each keystroke and applied supervised machine learning techniques to learn and evaluate an individual keyboard model. The model not only accounts for individual keystroke distributions but also incorporates a classifier based on the images obtained from an optical touch sensor. The resulting highly individual classifier achieves remarkable classification accuracy. Additionally, we present an approach that compensates for hand drift during typing using a Kalman filter. We show that this filter performs significantly better with the keyboard model that takes raw sensor data into account.
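
The drift-compensation idea mentioned in the abstract can be illustrated with a small sketch. The following is a minimal, hypothetical example (not the authors' implementation): a 2D Kalman filter tracks a slowly drifting offset of the resting hand position as a random walk, using the residual between each observed touch point and the center of the classified key as a noisy measurement. The state vector, noise parameters, and class/function names are assumptions for illustration only.

```python
import numpy as np


class HandDriftFilter:
    """Minimal 2D Kalman filter sketch for tracking hand drift while typing.

    Illustrative assumption, not the paper's implementation: the state is the
    offset (dx, dy) of the current hand position from the calibrated home
    position, modeled as a random walk.
    """

    def __init__(self, process_var=0.5, measurement_var=25.0):
        self.x = np.zeros(2)                    # estimated offset (dx, dy)
        self.P = np.eye(2) * 1e3                # initial uncertainty
        self.Q = np.eye(2) * process_var        # random-walk process noise
        self.R = np.eye(2) * measurement_var    # keystroke scatter noise

    def update(self, touch_xy, expected_key_xy):
        """Update the drift estimate from one keystroke.

        touch_xy: observed touch position of the keystroke.
        expected_key_xy: center of the key the classifier decided on.
        Their difference serves as a noisy measurement of the hand offset.
        """
        z = np.asarray(touch_xy) - np.asarray(expected_key_xy)

        # Predict: a random-walk state is unchanged, only covariance grows.
        self.P = self.P + self.Q

        # Correct with the new residual measurement (observation matrix H = I).
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x

    def corrected_position(self, touch_xy):
        """Shift an incoming touch back by the estimated drift before
        passing it to the key classifier."""
        return np.asarray(touch_xy) - self.x
```

In this sketch, each confidently classified keystroke refines the drift estimate, and subsequent touches are shifted back by that estimate before classification; a richer model, as the abstract suggests, would feed the raw optical sensor image of each keystroke into the classifier rather than relying on touch coordinates alone.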