This paper reports the approach and main results of CUE-4, the fourth in a series of Comparative Usability Evaluation studies. A total of 17 experienced professional teams independently evaluated the usability of the website for the Hotel Pennsylvania in New York: nine teams used usability testing, while eight teams used expert reviews. The CUE-4 results document wide differences in both the resources applied and the issues reported. The teams reported 340 different usability issues. Only nine of these issues were reported by more than half of the teams, while 205 issues (60%) were reported by just one team. Of these 205 uniquely reported issues, 61 were classified as serious or critical problems. For the issues identified, the study shows no practical difference between the results obtained from usability testing and those obtained from expert reviews, and it could not demonstrate the existence of either missed problems or false alarms in expert reviews. The paper further discusses quality measures for usability evaluation productivity.