Web accessibility evaluations are typically performed by automatic tools and by human assessment. Accessibility metrics quantify the accessibility level or the accessibility barriers of a site, providing a numerical synthesis of such evaluations. It is worth noting that, while automatic tools usually return binary values (i.e., the presence or absence of an error), human judgments in manual evaluations are subjective and can take values from a continuous range. In this paper we present a model that takes multiple manual evaluations into account and produces a single final value. In particular, an extension of our previous metric BIF, called cBIF, has been designed and implemented to evaluate the consistency and effectiveness of this model. Suitable tools and the collaboration of a group of evaluators are providing us with first results on the metric, as well as interesting clues for future research.
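The core step described above — reducing several subjective, continuous manual ratings to a single value per barrier — can be illustrated with a minimal sketch. The function name, the `[0, 1]` rating scale, and the plain-mean aggregation below are illustrative assumptions for exposition only; the actual cBIF formula is the one defined in the paper.

```python
# Illustrative sketch (NOT the actual cBIF formula): collapse multiple
# evaluators' continuous severity ratings into one value per barrier.
from statistics import mean


def aggregate_manual_scores(ratings: dict[str, list[float]]) -> dict[str, float]:
    """For each barrier, reduce the evaluators' ratings (assumed to lie
    in [0, 1]) to a single value; a plain mean is assumed here."""
    return {barrier: mean(scores) for barrier, scores in ratings.items()}


# Three evaluators rate two barriers on a continuous [0, 1] scale.
ratings = {
    "missing-alt-text": [0.8, 0.6, 0.7],
    "low-contrast": [0.3, 0.4, 0.2],
}
print(aggregate_manual_scores(ratings))
```

In contrast, an automatic tool would contribute only a binary presence/absence flag per barrier, which is why a separate aggregation model is needed for the human side.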