In the usability evaluation of interactive systems, heuristic evaluation is a widespread method. In most applications its results are qualitative, describing aspects of the interface that should be improved for the benefit of usability. However, such qualitative results do not make it possible to determine how usable an interactive system actually is. Quantitative results are therefore also needed, in order to estimate the effort required to reach a sufficiently usable system. Following the idea of the UsabAIPO Project, this article describes a new experiment for obtaining quantitative results from a heuristic evaluation. This new experiment required some variation on the original idea: it works with a set of different heuristic categories, and it scores each problem according to severity and frequency parameters.
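As a rough illustration of the kind of quantification described above, the sketch below aggregates a severity-times-frequency score per heuristic category. It is an assumption-laden sketch, not the paper's actual formula: the 0–4 severity scale (after Nielsen), the frequency-as-fraction-of-evaluators encoding, and the multiplicative weighting are all illustrative choices, and the category names and `Finding` type are hypothetical.

```python
# Hypothetical sketch: one way to turn heuristic-evaluation findings into
# quantitative per-category scores. The severity scale (0-4, after Nielsen),
# the frequency encoding, and the severity * frequency weighting are
# assumptions for illustration, not the formula used in the article.
from dataclasses import dataclass

@dataclass
class Finding:
    category: str      # heuristic category the problem falls under
    severity: int      # 0 (not a problem) .. 4 (usability catastrophe)
    frequency: float   # fraction of evaluators who reported it, in [0, 1]

def category_scores(findings):
    """Sum severity * frequency per category; a higher score means worse usability."""
    scores = {}
    for f in findings:
        scores[f.category] = scores.get(f.category, 0.0) + f.severity * f.frequency
    return scores

# Toy data with made-up categories and ratings.
findings = [
    Finding("Visibility of system status", 3, 0.8),
    Finding("Visibility of system status", 2, 0.4),
    Finding("Error prevention", 4, 0.6),
]
print(category_scores(findings))
# -> {'Visibility of system status': 3.2, 'Error prevention': 2.4}
```

A score like this makes categories comparable, so the improvement effort can be directed at the categories with the highest accumulated severity-frequency weight.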