A system has been developed that automatically evaluates the usability and accessibility of web sites by checking their HTML code against guidelines. All usability and accessibility guidelines are formally expressed in an XML-compliant specification language, the Guideline Definition Language (GDL), so as to separate the evaluation engine from the evaluation logic (the guidelines). This separation makes it possible to manage guidelines (i.e., create, retrieve, update, and delete them) without modifying the code of the evaluation engine. The evaluation engine is coupled to a reporting system that automatically generates one or more evaluation reports in a flexible way: reports can be adapted for screen reading or for print, and sorted by page, by object, by guideline, by priority, or by severity of the detected problems. This paper focuses on the reporting system.
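The separation described above can be illustrated with a minimal sketch. The actual GDL schema is not reproduced in this abstract, so the XML rule format, element names, and engine below are hypothetical assumptions, not the authors' implementation; the sketch only demonstrates the principle of keeping guidelines as editable data while the engine stays generic, and of sorting the resulting report by priority.

```python
# Illustrative sketch only: the GDL schema is not given here, so this XML
# format and the engine are assumptions, not the paper's actual system.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# Hypothetical guideline set: each rule names a target tag, a required
# attribute, a priority, and a message. Editing this XML changes the
# evaluation without touching the engine code below.
GUIDELINES_XML = """
<guidelines>
  <guideline id="img-alt" priority="1">
    <target tag="img"/>
    <require attribute="alt"/>
    <message>Images must carry a textual alternative (alt).</message>
  </guideline>
  <guideline id="a-title" priority="2">
    <target tag="a"/>
    <require attribute="title"/>
    <message>Links should carry a descriptive title.</message>
  </guideline>
</guidelines>
"""

def load_guidelines(xml_text):
    """Parse guideline definitions into plain rule records."""
    rules = []
    for g in ET.fromstring(xml_text).findall("guideline"):
        rules.append({"id": g.get("id"),
                      "priority": int(g.get("priority")),
                      "tag": g.find("target").get("tag"),
                      "attr": g.find("require").get("attribute"),
                      "message": g.find("message").text})
    return rules

class Evaluator(HTMLParser):
    """Generic engine: knows nothing about any specific guideline."""
    def __init__(self, rules):
        super().__init__()
        self.rules = rules
        self.problems = []

    def handle_starttag(self, tag, attrs):
        present = {name for name, _ in attrs}
        for rule in self.rules:
            if rule["tag"] == tag and rule["attr"] not in present:
                self.problems.append({"guideline": rule["id"],
                                      "priority": rule["priority"],
                                      "message": rule["message"]})

def evaluate(html_text, xml_text):
    """Check HTML against the guidelines; report sorted by priority."""
    engine = Evaluator(load_guidelines(xml_text))
    engine.feed(html_text)
    return sorted(engine.problems, key=lambda p: p["priority"])

report = evaluate('<img src="x.png"><a href="/">home</a>', GUIDELINES_XML)
```

In this sketch, adding or retiring a guideline means editing `GUIDELINES_XML` only, which mirrors the create/retrieve/update/delete management the abstract attributes to GDL; the priority-sorted `report` stands in for one of the sort orders the reporting system offers.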