Reducing visual demand for gestural text input on touchscreen devices

  • Authors:
  • Scott MacKenzie; Steven Castellucci

  • Affiliations:
  • York University, Toronto, Ontario, Canada (both authors)

  • Venue:
  • CHI '12 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2012


Abstract

We developed a text entry method for touchscreen devices that combines a Graffiti-like gesture alphabet with automatic error correction. The method is novel in that the user does not see the results of the recognition process until the end of a phrase. We justify the method over soft keyboards using a Frame Model of Visual Attention, which reveals both the presence and the advantage of reduced visual attention. With less ongoing feedback to monitor, users tend to enter gestures more quickly. Preliminary testing shows reasonably fast text entry speeds (20 wpm) with low error rates (
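The abstract's deferred, dictionary-based correction idea can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a hypothetical recognizer emits one (possibly misrecognized) word per gesture sequence, and corrects each word to its nearest dictionary entry by Levenshtein distance only once the whole phrase is complete, mirroring the end-of-phrase feedback described above.

```python
# Illustrative sketch only -- not the paper's actual recognizer or
# correction algorithm. Assumes a small word dictionary and corrects
# each recognized word to the closest entry by edit distance, deferred
# until the full phrase has been entered.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct_phrase(words, dictionary):
    """Replace each recognized word with its nearest dictionary word."""
    return [min(dictionary, key=lambda w: edit_distance(word, w))
            for word in words]

# Hypothetical recognizer output containing per-character errors:
dictionary = ["the", "quick", "brown", "fox", "jumps"]
print(correct_phrase(["teh", "quicc", "brwn"], dictionary))
# -> ['the', 'quick', 'brown']
```

Deferring correction to phrase end, as the paper proposes, is what removes the need to visually monitor each character as it is recognized.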