The Web Content Accessibility Guidelines 2.0 (WCAG 2.0) require that success criteria be tested by human inspection. Further, a success criterion is considered testable if at least 80% of knowledgeable inspectors agree on whether it has been met. In this paper we investigate the very core of WCAG 2.0: its ability to determine web content accessibility conformance. We conducted an empirical study to ascertain the testability of WCAG 2.0 success criteria when experts and non-experts evaluated four relatively complex web pages, and to identify the differences between the two groups. Further, we discuss the validity of the evaluations produced by these inspectors and examine how validity differs with expertise. In summary, our study, comprising 22 experts and 27 non-experts, shows that approximately 50% of success criteria fail to meet the 80% agreement threshold; experts produce 20% false positives and miss 32% of the true problems. We also compared the performance of non-experts against that of experts and found that agreement among non-experts drops by 6%, false positives reach 42%, and false negatives reach 49%. This suggests that in many cases WCAG 2.0 conformance cannot be tested by human inspection to a level where at least 80% of knowledgeable human evaluators would agree on the conclusion. Why experts fail to meet the 80% threshold, and what can be done to help them reach it, are subjects of further investigation.
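The headline figures rest on two measurements: per-criterion inter-evaluator agreement (checked against the 80% testability threshold) and false-positive/false-negative rates relative to a set of true problems. The Python sketch below illustrates how such figures can be computed from per-evaluator verdicts; the sample data, success criteria, and function names are hypothetical and are not taken from the study.

```python
from collections import Counter

def agreement_rate(verdicts):
    """Share of evaluators giving the most common verdict ('pass'/'fail')
    for one success criterion."""
    counts = Counter(verdicts)
    return counts.most_common(1)[0][1] / len(verdicts)

def error_rates(reported_fails, true_fails):
    """False positives: reported failures that are not real problems.
    False negatives: real problems that were never reported."""
    fp = len(reported_fails - true_fails) / len(reported_fails) if reported_fails else 0.0
    fn = len(true_fails - reported_fails) / len(true_fails) if true_fails else 0.0
    return fp, fn

# Hypothetical verdicts from five evaluators on two WCAG 2.0 success criteria.
verdicts = {
    "1.1.1 Non-text Content": ["fail", "fail", "fail", "fail", "pass"],  # 80% agreement
    "2.4.4 Link Purpose":     ["pass", "fail", "pass", "fail", "pass"],  # 60% agreement
}
for criterion, v in verdicts.items():
    r = agreement_rate(v)
    print(f"{criterion}: {r:.0%} agreement ->",
          "testable" if r >= 0.8 else "below 80% threshold")

# Hypothetical validity check against a known set of true problems.
fp, fn = error_rates({"1.1.1 Non-text Content", "2.4.4 Link Purpose"},
                     {"1.1.1 Non-text Content", "1.3.1 Info and Relationships"})
print(f"false positives: {fp:.0%}, false negatives: {fn:.0%}")
```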