Designing for usability: key principles and what designers think. Communications of the ACM.
The 1984 Olympic Message System: a test of behavioral principles of system design. Communications of the ACM.
Proceedings of the Fifth Conference of the British Computer Society Human-Computer Interaction Specialist Group: People and Computers V.
Human factors testing in the design of Xerox's 8010 “Star” office workstation. CHI '83: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Comparison of empirical testing and walkthrough methods in user interface evaluation. CHI '92: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Getting access to what goes on in people's heads?: reflections on the think-aloud technique. Proceedings of the Second Nordic Conference on Human-Computer Interaction.
Evaluating Electronic Textbooks: A Methodology. ECDL '01: Proceedings of the 5th European Conference on Research and Advanced Technology for Digital Libraries.
Watch-and-comment as a paradigm toward ubiquitous interactive video editing. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP).
A test-first view of usability. Interacting with Computers.
Interaction walkthrough: evaluation of safety critical interactive systems. DSVIS'06: Proceedings of the 13th International Conference on Interactive Systems: Design, Specification, and Verification.
Proceedings of the 2011 ACM Symposium on Applied Computing.
UAHCI'11: Proceedings of the 6th International Conference on Universal Access in Human-Computer Interaction: Applications and Services - Volume Part IV.
Think aloud: effects and validity. Proceedings of the 30th ACM International Conference on Design of Communication.
Proceedings of the 24th Australian Computer-Human Interaction Conference.
This paper reports on two studies in which teams of two or three trainee designers evaluated a user interface by observing a user work through a set of predefined tasks; the users were instructed to think aloud as they worked. The only instruction the designers received was a brief how-to-do-it manual. Study 1 demonstrates that the method is effective. Study 2 found that the designers of a system detected more of its problems than other evaluator groups did, and that designers cannot predict, in advance of user testing, which problems users will actually experience.