Disambiguation of imprecise input with one-dimensional rotational text entry

  • Authors:
  • William S. Walmsley; W. Xavier Snelgrove; Khai N. Truong

  • Affiliation:
  • University of Toronto, Ontario, Canada

  • Venue:
  • ACM Transactions on Computer-Human Interaction (TOCHI)
  • Year:
  • 2014

Abstract

We introduce a distinction between disambiguation supporting continuous versus discrete ambiguous text entry. With continuous ambiguous text entry methods, letter selections are treated as ambiguous due to expected imprecision rather than due to discretized letter groupings. We investigate the simple case of a one-dimensional character layout to demonstrate the potential of techniques designed for imprecise entry. Our rotation-based, sight-free technique, Rotext, maps device orientation to a layout optimized for disambiguation, motor efficiency, and learnability. We also present an audio feedback system for efficient selection of disambiguated word candidates and explore the role that time spent acknowledging word-level feedback plays in text entry performance. Through a user study, we show that although users miss their intended characters by an average of 2.46--2.92 positions, a maximum a posteriori (MAP) disambiguation algorithm enables them to average a sight-free entry speed of 12.6 wpm with 98.9% accuracy within 13 sessions (4.3 hours). In a second study, expert users reach 21 wpm with 99.6% accuracy after session 20 (6.7 hours) and continue to improve, with individual phrases entered at up to 37 wpm. A final study revisits the learnability of the optimized layout. Our modeling of ultimate performance indicates maximum overall sight-free entry speeds of 29.0 wpm with audio feedback, or 40.7 wpm if an expert user could operate without relying on audio feedback.
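
To make the MAP disambiguation idea concrete, the sketch below scores each candidate word by combining a word prior with a per-character likelihood of the imprecise one-dimensional selections, then returns the highest-scoring word. The alphabetical layout, Gaussian noise model, and toy lexicon are illustrative assumptions only; they are not the Rotext implementation, which uses a layout optimized for disambiguation and motor efficiency.

```python
# Minimal sketch of MAP disambiguation for imprecise 1-D selections.
# The alphabetical layout, Gaussian noise model, sigma value, and tiny
# lexicon are illustrative assumptions, not the paper's exact method.
import math

LAYOUT = "abcdefghijklmnopqrstuvwxyz"            # assumed 1-D character order
POSITION = {c: i for i, c in enumerate(LAYOUT)}  # character -> index on the line
SIGMA = 2.5                                      # assumed imprecision (in character positions)

def log_likelihood(observed, word):
    """log P(observed positions | word), one Gaussian term per character."""
    if len(observed) != len(word):
        return float("-inf")
    total = 0.0
    for obs, ch in zip(observed, word):
        d = obs - POSITION[ch]
        total += -(d * d) / (2 * SIGMA * SIGMA) - math.log(SIGMA * math.sqrt(2 * math.pi))
    return total

def map_disambiguate(observed, lexicon):
    """Return the word maximizing log P(word) + log P(observed | word)."""
    return max(lexicon, key=lambda w: math.log(lexicon[w]) + log_likelihood(observed, w))

# Usage: noisy selections that land near, but not exactly on, the intended letters.
lexicon = {"cat": 0.6, "bat": 0.3, "ebb": 0.1}       # word -> prior probability
print(map_disambiguate([3.1, 0.4, 18.7], lexicon))   # -> "cat"
```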