The keystroke-level model for user performance time with interactive systems
Communications of the ACM
Evaluation of computer text editors
The evaluation of text editors: methodology and empirical results
Communications of the ACM
Details of command-language keystrokes
ACM Transactions on Information Systems (TOIS)
A comparative study of moded and modeless text editing by experienced editor users
CHI '83 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Soundtrack: an auditory interface for blind users
Human-Computer Interaction
Does the medium make a difference? two studies of writing with pen and paper and with computers
Human-Computer Interaction
Usability Testing Essentials: Ready, Set...Test!
Changing perspectives on evaluation in HCI: past, present, and future
CHI '13 Extended Abstracts on Human Factors in Computing Systems
Gestures and widgets: performance in text editing on multi-touch capable mobile devices
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
This paper presents a methodology for evaluating computer text editors from the viewpoint of their users, from novices learning the editor to dedicated experts who have mastered it. The methodology addresses four dimensions:

- Time to perform edit tasks by experts.
- Errors made by experts.
- Learning of basic edit tasks by novices.
- Functionality over all possible edit tasks.

The methodology is objective and thorough, yet easy to use. Objectivity implies that the evaluation scheme is not biased in favor of any particular editor's conceptual model, that is, its way of representing text and operations on the text; in addition, data are gathered by observing people who are equally familiar with each system. Thoroughness implies that several different aspects of editor usage are considered. Ease of use means that the methodology is usable by editor designers, managers of word-processing centers, and other non-psychologists who need this kind of information but have limited time and equipment resources. In this paper, we explain the methodology first, then give some interesting empirical results from applying it to several editors.
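To make the first two dimensions concrete, the sketch below shows one way an evaluator might summarize expert performance from observation logs. The record layout, task names, and numbers are purely illustrative assumptions, not data or procedures from the paper itself.

```python
from statistics import mean

# Hypothetical per-task observations for one expert user of one editor:
# (task_id, seconds_to_complete, number_of_errors). All values are
# made up for illustration.
observations = [
    ("insert-word", 8.2, 0),
    ("delete-line", 5.1, 1),
    ("move-paragraph", 14.7, 0),
    ("replace-string", 9.4, 2),
]

def expert_metrics(obs):
    """Summarize two of the methodology's dimensions for an expert user:
    mean time per benchmark edit task, and mean errors per task."""
    times = [seconds for _, seconds, _ in obs]
    errors = [n_errors for _, _, n_errors in obs]
    return {
        "mean_task_time_s": round(mean(times), 2),
        "errors_per_task": round(mean(errors), 2),
    }

print(expert_metrics(observations))
# → {'mean_task_time_s': 9.35, 'errors_per_task': 0.75}
```

Comparing editors on these summaries, rather than on features of any one editor's conceptual model, is what keeps the scheme objective in the sense described above.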