Usability evaluation is widely accepted as a valuable activity in software development. However, how results are effectively fed back to developers remains a relatively unexplored area. We argue that usability feedback can be understood as an argument for a series of usability problems, and that basic concepts from argumentation theory can help us understand how to create persuasive feedback. We revisit two field studies on usability feedback to examine whether concepts from Toulmin's model of argumentation and Aristotle's modes of persuasion can explain why some feedback formats outperform others. We recommend that evaluators explicitly back up the warrants behind their usability claims, that their arguments draw on several modes of persuasion, and that they present feedback in browsable amounts so as not to overwhelm developers with information. For complex and controversial problems, we advise evaluators to involve developers in a learning process and provide opportunities to experience and discuss the findings.