Maximum entropy models for natural language ambiguity resolution
An unsupervised method for detecting grammatical errors
NAACL 2000 Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference
Minimizing manual annotation cost in supervised training from corpora
ACL '96 Proceedings of the 34th annual meeting on Association for Computational Linguistics
Automatic error detection in the Japanese learners' English spoken data
ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 2
Detecting errors in English article usage by non-native speakers
Natural Language Engineering
A feedback-augmented method for detecting errors in the writing of learners of English
ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics
The ups and downs of preposition error detection in ESL writing
COLING '08 Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1
Detection of grammatical errors involving prepositions
SigSem '07 Proceedings of the Fourth ACL-SIGSEM Workshop on Prepositions
Automatically acquiring models of preposition use
SigSem '07 Proceedings of the Fourth ACL-SIGSEM Workshop on Prepositions
Prepositions in applications: A survey and introduction to the special issue
Computational Linguistics
User input and interactions on Microsoft Research ESL Assistant
EdAppsNLP '09 Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications
Annotating language errors in texts: investigating argumentation and decision schemas
ACL-IJCNLP '09 Proceedings of the Third Linguistic Annotation Workshop
Training paradigms for correcting errors in grammar and usage
HLT '10 Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Using mostly native data to correct errors in learners' writing: a meta-classifier approach
HLT '10 Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Using parse features for preposition selection and error detection
ACLShort '10 Proceedings of the ACL 2010 Conference Short Papers
Annotating ESL errors: challenges and rewards
IUNLPBEA '10 Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications
Rethinking grammatical error annotation and evaluation with the Amazon Mechanical Turk
IUNLPBEA '10 Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications
Exploring the data-driven prediction of prepositions in English
COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics: Posters
On Morphological Analysis for Learner Language, Focusing on Russian
Research on Language and Computation
Data-driven correction of function words in non-native English
ENLG '11 Proceedings of the 13th European Workshop on Natural Language Generation
Informing determiner and preposition error correction with word clusters
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP
Evaluating and automating the annotation of a learner corpus
Language Resources and Evaluation
Evaluation and annotation are two of the greatest challenges in developing NLP instructional or diagnostic tools that mark grammar and usage errors in the writing of non-native speakers. Past approaches have commonly relied on a single rater to annotate a corpus of learner errors against which system output is compared. In this paper, we show how using only one rater can skew system evaluation, and we then present a sampling approach that makes it possible to evaluate a system more efficiently.