The issue here is not whether discount techniques should be used; they are inevitable. The issue is: given the ridiculously limited resources you are provided, how do you do the best job you can? How confident should you be in the techniques you are using? A bad design may come back and bite you. When you choose a technique in a hurry, you are placing your professional reputation, and perhaps your job, on the line. You deserve to know four things about any technique you apply:

- The hit rate: how many real problems will this technique uncover?
- The false-alarm rate: how many things (and what sorts) will it falsely identify as problems, which may not exist but are costly and time-consuming to "fix"?
- The misses: what types of problems (and how many) does this technique fail to discover?
- The correct rejections: when the technique tells you something is not a problem, how confident can you be that it really is not?

Discount techniques are not a substitute for the potent combination of analytic and empirical methodologies that usability professionals can bring to bear in designing and evaluating an interface.
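The four quantities above are the standard signal-detection breakdown of an evaluator's judgments. As a minimal sketch (not from the text; the function name, counts, and study setup are all illustrative assumptions), they can be computed from four counts obtained by comparing a technique's reports against a set of independently verified problems:

```python
# Hypothetical sketch: scoring a usability evaluation technique with the four
# signal-detection quantities named in the text. Counts come from comparing
# the technique's reports against an independently verified set of problems.

def technique_scorecard(hits, false_alarms, misses, correct_rejections):
    """Return the four rates for a discount evaluation technique."""
    real_problems = hits + misses                     # problems that truly exist
    non_problems = false_alarms + correct_rejections  # candidates that do not
    return {
        "hit_rate": hits / real_problems,
        "false_alarm_rate": false_alarms / non_problems,
        "miss_rate": misses / real_problems,
        "correct_rejection_rate": correct_rejections / non_problems,
    }

# Illustrative numbers: the technique flags 18 of 24 real problems and raises
# 5 spurious issues out of 40 candidate non-problems.
print(technique_scorecard(hits=18, false_alarms=5, misses=6, correct_rejections=35))
```

Note that the hit rate and miss rate are complements, as are the false-alarm and correct-rejection rates, so a technique's scorecard is fully determined by any one rate from each pair.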