Plans and situated actions: the problem of human-machine communication
Plans and situated actions: the problem of human-machine communication
Improving a human-computer dialogue
Communications of the ACM
User interface evaluation in the real world: a comparison of four techniques
CHI '91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Refining the test phase of usability evaluation: how many subjects is enough?
Human Factors - Special issue: measurement in human factors
Comparison of empirical testing and walkthrough methods in user interface evaluation
CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
A mathematical model of the finding of usability problems
INTERCHI '93 Proceedings of the INTERCHI '93 conference on Human factors in computing systems
Usability inspection methods
Faster, cheaper!! Are usability inspection methods as effective as empirical testing?
Usability inspection methods
Enhancing the explanatory power of usability heuristics
CHI '94 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Cognitive engineering principles for enhancing human-computer performance
International Journal of Human-Computer Interaction
Designing the User Interface: Strategies for Effective Human-Computer Interaction
Designing the User Interface: Strategies for Effective Human-Computer Interaction
Testing web sites: five users is nowhere near enough
CHI '01 Extended Abstracts on Human Factors in Computing Systems
Introduction to this special issue on experimental comparisons of usability evaluation methods
Human-Computer Interaction
Damaged merchandise? a review of experiments that compare usability evaluation methods
Human-Computer Interaction
On "Technomethodology": foundational relationships between ethnomethodology and system design
Human-Computer Interaction
Analysis of combinatorial user effect in international usability tests
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Analysis of strategies for improving and estimating the effectiveness of heuristic evaluation
Proceedings of the third Nordic conference on Human-computer interaction
Comparing accessibility evaluation and usability evaluation in HagáQuê
CLIHC '05 Proceedings of the 2005 Latin American conference on Human-computer interaction
How HCI-practitioners want to evaluate their own practice
Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles
A comparative study of two usability evaluation methods using a web-based e-learning application
Proceedings of the 2007 annual research conference of the South African institute of computer scientists and information technologists on IT research in developing countries
Scenarios in the Heuristic Evaluation of Mobile Devices: Emphasizing the Context of Use
HCD '09 Proceedings of the 1st International Conference on Human Centered Design: Held as Part of HCI International 2009
Number of people required for usability evaluation: the 10±2 rule
Communications of the ACM
Do patterns help novice evaluators? A comparative study
International Journal of Human-Computer Studies
Tourism Mobile Application Usability: The Case of iTicino
International Journal of E-Services and Mobile Applications
Journal of Systems and Software
The aim of this paper is twofold: (i) to compare the effectiveness of two evaluation methods, namely heuristic evaluation and usability testing, as applied to an experimental version of the UNIVERSAL Brokerage Platform (UBP), and (ii) to infer implications from the empirical findings of the usability test. Eight claims derived from previous research are re-examined against the data of the current study. While the complementarity and convergence of the results yielded by the two methods can be confirmed to some extent, no conclusive explanation for their divergence can be obtained, especially on the issue of whether the usability problems reported lead to failures in real use. One significant implication drawn is the need for a meta-analysis of a sufficient number of well-designed and professionally performed empirical studies of usability evaluation methods.
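Several of the claims at issue here (e.g., how many evaluators or test subjects are "enough") rest on the problem-discovery model from Nielsen and Landauer's "A mathematical model of the finding of usability problems", cited above. A minimal sketch of that model follows; the default detection rate of 0.31 is the average Nielsen and Landauer reported across their projects, and real values vary widely by system and evaluator skill.

```python
def proportion_found(n: int, detection_rate: float = 0.31) -> float:
    """Expected share of usability problems found by n independent evaluators.

    Implements Found(n) = 1 - (1 - L)**n, where L is the mean probability
    that a single evaluator detects any given problem. L = 0.31 is the
    average from Nielsen and Landauer's data, not a universal constant.
    """
    return 1 - (1 - detection_rate) ** n


if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n:2d} evaluators -> {proportion_found(n):.0%} of problems found")
```

Under these assumptions, five evaluators find roughly 85% of problems, which is the basis of the often-debated "five users" guideline that papers above such as "Testing web sites: five users is nowhere near enough" contest.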