A common methodology for evaluating text entry methods is to ask participants to transcribe a predefined set of memorable sentences or phrases. In this article, we explore whether this conventional transcription task can be complemented with a more externally valid composition task. In a series of large-scale crowdsourced experiments, we found that participants could consistently and rapidly invent high-quality, creative compositions with only modest reductions in entry rate. Based on these experiments, we provide a best-practice procedure for using composition tasks in text entry evaluations, including a judging protocol that can be carried out either by the experimenters or by crowdsourced workers on a microtask market. We evaluated our composition task procedure using a text entry method unfamiliar to participants. Our empirical results show that the composition task can serve as a valid complementary text entry evaluation method.
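For context, the entry rates discussed above are conventionally reported in text entry research as words per minute (WPM), where a "word" is defined as five characters and the first character is excluded because timing starts when it is entered. A minimal illustrative sketch of this standard computation (the function name and example values are ours, not taken from the article):

    def entry_rate_wpm(transcribed: str, seconds: float) -> float:
        # Standard WPM convention: five characters (including spaces)
        # per word; the first character is excluded because the clock
        # starts on its entry.
        if seconds <= 0:
            raise ValueError("elapsed time must be positive")
        return ((len(transcribed) - 1) / seconds) * 60.0 / 5.0

    # Example: a 22-character phrase entered in 6 seconds -> 42.0 WPM.
    print(entry_rate_wpm("the quick brown fox ok", 6.0))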