Despite the rising interest in developing grammatical error detection systems for non-native speakers of English, progress in the field has been hampered by a lack of informative metrics and by an inability to directly compare the performance of systems developed by different researchers. In this paper, we address these problems by presenting two evaluation methodologies, both based on a novel use of crowdsourcing.