Usability testing is a key step in the successful design of new technologies and tools, ensuring that heterogeneous populations will be able to interact easily with innovative applications. While usability testing methods for productivity tools (e.g., text editors, spreadsheets, or management tools) are varied, widely available, and valuable, analyzing the usability of games, especially educational "serious" games, presents unique challenges. Because games are fundamentally different from general productivity tools, "traditional" usability instruments that are valid for productivity applications may fall short when applied to serious games. In this work we present a methodology designed specifically to facilitate usability testing for serious games, one that takes into account the particular needs of such applications and systematically produces a list of suggested improvements from large amounts of recorded gameplay data. We applied this methodology in a case study of MasterMed, a medical educational game intended to improve patients' knowledge of their medications. We present the results of applying the methodology to MasterMed and summarize the central lessons learned, which should be useful for researchers aiming to tune and improve their own serious games before releasing them to the general public.