Do patterns help novice evaluators? A comparative study
International Journal of Human-Computer Studies
The evaluation of e-learning applications deserves special attention, and evaluators need effective methodologies and appropriate guidelines to perform their task. We have proposed a methodology, called eLSE (e-Learning Systematic Evaluation), which combines a specific inspection technique with user testing. The inspection is designed to allow inspectors who may lack extensive experience in evaluating e-learning systems to perform accurate evaluations. It is based on evaluation patterns, called Abstract Tasks (ATs), which precisely describe the activities to be performed during inspection; for this reason, it is called AT inspection. In this paper, we present an empirical validation of the AT inspection technique: three groups of novice inspectors evaluated a commercial e-learning system by applying AT inspection, heuristic inspection, or user testing. The results show an advantage of AT inspection over the other two usability evaluation methods, demonstrating that Abstract Tasks are effective and efficient tools for guiding evaluators and improving their performance. Important methodological considerations on the reliability of usability evaluation techniques are also discussed.
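To make the idea of pattern-driven inspection more concrete, the sketch below shows one way an Abstract Task could be represented as a structured record that a novice inspector follows step by step. This is a minimal illustration only: the abstract does not specify the internal structure of an AT, so the field names, the example identifier, and the sample content are assumptions, not the format defined by the eLSE methodology.

```python
# Illustrative sketch of an evaluation pattern ("Abstract Task").
# Field names and sample values are hypothetical, not taken from eLSE.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AbstractTask:
    code: str                        # hypothetical identifier, e.g. "AT-NAV-01"
    title: str                       # short name of the pattern
    focus: str                       # which part of the application to inspect
    intent: str                      # usability concern the task targets
    activities: List[str] = field(default_factory=list)  # steps the inspector performs
    expected_output: str = ""        # what the inspector should report

# Example of how one such pattern might guide a novice inspector:
sample_at = AbstractTask(
    code="AT-NAV-01",
    title="Check course navigation structure",
    focus="Navigation bar and course index of the e-learning platform",
    intent="Verify that learners always know where they are and how to go back",
    activities=[
        "Open a course module from the index",
        "Follow three consecutive links within the module",
        "Return to the course index using only the interface",
    ],
    expected_output="List of points where orientation within the course is lost",
)
print(sample_at.title)
```

The point of such a record is that the inspection activities are spelled out in advance, so evaluators with little experience of e-learning systems can still apply them consistently.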