This paper investigates two strategies for collecting readability assessments: an Expert Readers application, intended to collect fine-grained readability assessments from language experts, and a Sort by Readability application, designed to be intuitive and open to anyone with internet access. We show that the data sets resulting from the two annotation strategies are very similar. We conclude that crowdsourcing is a viable alternative to the opinions of language experts for readability prediction.
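
The abstract does not state how the similarity between the two annotation data sets was measured. As a hedged illustration only, the sketch below applies two standard agreement measures to a pair of hypothetical label sets; the labels, the five-point scale, and the choice of metrics are assumptions for demonstration, not the paper's actual data or method.

```python
# Illustrative sketch: comparing annotations from two collection strategies
# on the same texts. All values here are hypothetical.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Readability grades (1 = easiest, 5 = hardest) for the same ten texts
expert_labels = [1, 2, 2, 3, 4, 4, 5, 3, 2, 1]   # e.g., Expert Readers application
crowd_labels  = [1, 2, 3, 3, 4, 5, 5, 3, 2, 1]   # e.g., Sort by Readability application

# Chance-corrected agreement on categorical labels
kappa = cohen_kappa_score(expert_labels, crowd_labels)

# Rank correlation, appropriate when assessments are treated as orderings
rho, p_value = spearmanr(expert_labels, crowd_labels)

print(f"Cohen's kappa: {kappa:.2f}")
print(f"Spearman's rho: {rho:.2f} (p = {p_value:.3f})")
```

High values on either measure would support the conclusion that crowdsourced and expert annotations are interchangeable for readability prediction.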