Heuristic evaluation of user interfaces. CHI '90 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Refining the test phase of usability evaluation: how many subjects is enough? Human Factors, Special Issue: Measurement in Human Factors.
Cognitive walkthroughs: a method for theory-based evaluation of user interfaces. International Journal of Man-Machine Studies.
Finding usability problems through heuristic evaluation. CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Visualization ability as a predictor of user learning success. International Journal of Man-Machine Studies.
The pluralistic usability walkthrough: coordinated empathies. Usability Inspection Methods.
Evaluating a user interface with ergonomic criteria. International Journal of Human-Computer Interaction.
Levels and types of mediation in instructional systems: an individual differences approach. International Journal of Human-Computer Studies.
User analysis in HCI—the historical lessons from individual differences research. International Journal of Human-Computer Studies.
A toolkit for strategic usability: results from workshops, panels, and surveys. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Users' interaction with World Wide Web resources: an exploratory study using a holistic approach. Information Processing and Management: an International Journal.
Cognitive styles and hypermedia navigation: development of a learning model. Journal of the American Society for Information Science and Technology.
Do cognitive styles affect learning performance in different computer media? Proceedings of the 7th Annual Conference on Innovation and Technology in Computer Science Education.
Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests.
On the reliability of usability testing. CHI '01 Extended Abstracts on Human Factors in Computing Systems.
Applying user testing data to UEM performance metrics. CHI '04 Extended Abstracts on Human Factors in Computing Systems.
Accommodating field-dependence: a cross-over study. Proceedings of the 9th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education.
Journal of the American Society for Information Science and Technology.
International Journal of Human-Computer Studies.
How much does expertise matter?: a barrier walkthrough study with experts and non-experts. Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility.
The usability inspection performance of work-domain experts: An empirical study. Interacting with Computers.
Testability and validity of WCAG 2.0: the expertise effect. Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility.
Do patterns help novice evaluators? A comparative study. International Journal of Human-Computer Studies.
Is accessibility conformance an elusive property? A study of validity and reliability of WCAG 2.0. ACM Transactions on Accessible Computing (TACCESS).
Exploring playability of social network games. ACE '12 Proceedings of the 9th International Conference on Advances in Computer Entertainment.
International Journal of Human-Computer Studies.
More testers - The effect of crowd size and time restriction in software testing. Information and Software Technology.
A new proposal for improving heuristic evaluation reports performed by novice evaluators. Proceedings of the 2013 Chilean Conference on Human-Computer Interaction.
Heuristic evaluation is a widely used usability evaluation method [Rosenbaum et al., 2000. A toolkit for strategic usability: results from workshops, panels, and surveys. In: Little, R., Nigay, L. (Eds.), Proceedings of ACM CHI 2000 Conference, New York, pp. 337-344]. However, it suffers from large variability in evaluation results due to differences among evaluators [Nielsen, 1993. Usability Engineering. Academic Press, Boston, MA]. The evaluation performance of evaluators with two cognitive styles, ten field independent (FI) subjects and ten field dependent (FD) subjects, was compared. The results indicated that the FI subjects produced evaluation results with significantly higher thoroughness (t(18)=3.49, p=0.0026), validity (t(18)=4.26, p=0.0005), effectiveness (t(18)=5.14, p=0.0001), and sensitivity (t(18)=3.16, p=0.005) than the FD subjects. When assessing their own evaluation experiences, the FI subjects found it easier to discover usability problems than the FD subjects did (t(18)=2.10, p=0.049), whereas the FD subjects felt more guided during the evaluation than the FI subjects (t(18)=2.28, p=0.035).
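The comparison rests on standard usability evaluation method (UEM) performance metrics and an independent-samples t-test with 10 subjects per group (hence 18 degrees of freedom). A minimal sketch, assuming the commonly used ratio definitions of thoroughness, validity, and effectiveness from the UEM-metrics literature; the function names and example scores here are illustrative, not the study's actual data:

```python
from statistics import mean, variance

def thoroughness(real_found: int, real_total: int) -> float:
    """Share of all real usability problems that an evaluator found."""
    return real_found / real_total

def validity(real_found: int, reported: int) -> float:
    """Share of an evaluator's reported problems that are real."""
    return real_found / reported

def effectiveness(real_found: int, real_total: int, reported: int) -> float:
    """Combined measure: thoroughness multiplied by validity."""
    return thoroughness(real_found, real_total) * validity(real_found, reported)

def pooled_t(a: list, b: list) -> float:
    """Independent-samples t statistic with pooled variance.
    With len(a) == len(b) == 10 the test has 18 degrees of freedom,
    matching the t(18) values reported in the abstract."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
```

For example, an evaluator who reports 12 problems, 8 of which match the 10 known real problems, scores thoroughness 0.8, validity 8/12 ≈ 0.67, and effectiveness ≈ 0.53. The p-values quoted in the abstract would then be read off the t distribution with 18 degrees of freedom.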