In this paper, we present results from two pilot studies showing that using Amazon Mechanical Turk for preposition error annotation is as effective as using trained raters, but at a fraction of the time and cost. Based on these results, we propose a new evaluation method that makes it feasible to compare two error detection systems tested on different learner data sets.