Consolidating usability problems (UPs) is an integral part of any usability evaluation that involves multiple users or analysts. However, little is known about how this process works or how it affects evaluation outcomes, which presumably influence how developers redesign the system of interest. We conducted an exploratory study with ten novice evaluators to examine how they merged UPs in individual and collaborative settings and how they reached consensus. Our findings indicate that collaborative merging deflates the absolute number of UPs while excessively inflating the frequency of certain UP types and their severity ratings. This effect can be attributed to novice evaluators' susceptibility to persuasion in a negotiation setting, which led them to aggregate UPs leniently. Such distorted UP attributes may mislead the prioritization of UPs for fixing and thus result in ineffective system redesign.
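The distortion described above can be made concrete with a small, purely hypothetical sketch (Python, with invented UP types and severity values; the paper reports an empirical study, not an algorithm). If same-type reports are merged leniently and the highest severity is kept, the total UP count drops while the surviving UPs carry inflated frequency and severity, which then skews any prioritization scheme that ranks by severity and frequency:

# Hypothetical illustration only (not from the paper): lenient merging of
# usability problem (UP) reports deflates the total count while inflating
# per-type frequency and severity. All names and data below are invented.

# Each raw report: (UP type, severity on a 1-4 scale)
raw_reports = [
    ("navigation", 2), ("navigation", 3), ("feedback", 1),
    ("navigation", 2), ("feedback", 2), ("terminology", 1),
]

def merge_leniently(reports):
    """Collapse all same-type reports into one UP, keeping the highest
    severity seen -- a lenient rule that inflates severity ratings."""
    merged = {}
    for up_type, severity in reports:
        merged[up_type] = max(merged.get(up_type, 0), severity)
    return merged

merged = merge_leniently(raw_reports)
print(len(raw_reports), "raw reports ->", len(merged), "merged UPs")  # 6 -> 3
for up_type, severity in merged.items():
    freq = sum(1 for t, _ in raw_reports if t == up_type)
    print(f"{up_type}: frequency {freq}, severity {severity}")

# A prioritization scheme that ranks UPs by severity * frequency would now
# favor types whose attributes were inflated by the merge, illustrating how
# lenient consolidation can misdirect redesign effort.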