We empirically compare five publicly available phrase sets in two large-scale (N = 225 and N = 150) crowdsourced text entry experiments. We also investigate the impact of asking participants to memorize phrases before writing them versus allowing participants to see the phrase during entry. We find that asking participants to memorize phrases increases entry rates at the cost of slightly increased error rates, and that this holds both for a familiar and for an unfamiliar text entry method. We find statistically significant differences between some of the phrase sets in terms of both entry and error rates. Based on our data, we arrive at a set of recommendations for choosing suitable phrase sets for text entry evaluations.
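The abstract does not define how entry and error rates were computed, but text entry evaluations conventionally report entry rate in words per minute (treating a "word" as five characters) and an error rate based on the minimum string distance (MSD) between the presented and transcribed phrases. The Python sketch below illustrates those standard formulas; the function names and example values are hypothetical, not taken from the paper.

    # Minimal sketch of the conventional text entry metrics (hypothetical
    # helper names): entry rate in words per minute and the MSD error rate.

    def entry_rate_wpm(transcribed: str, seconds: float) -> float:
        """Entry rate in words per minute; a 'word' is five characters
        (including spaces), timed from the first to the last keystroke."""
        return (len(transcribed) - 1) / seconds * 60.0 / 5.0

    def msd(a: str, b: str) -> int:
        """Minimum string distance (Levenshtein distance) between the
        presented and transcribed strings, via dynamic programming."""
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[m][n]

    def msd_error_rate(presented: str, transcribed: str) -> float:
        """Error rate (%) as MSD normalized by the longer string."""
        longest = max(len(presented), len(transcribed))
        return 100.0 * msd(presented, transcribed) / longest if longest else 0.0

    # Example with made-up values: one slightly misspelled transcription.
    presented = "the quick brown fox jumps over the lazy dog"
    transcribed = "the quick brwn fox jumps over the lazy dog"
    print(entry_rate_wpm(transcribed, seconds=12.8))  # ~38.4 wpm
    print(msd_error_rate(presented, transcribed))     # ~2.3 %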