An unsupervised method for detecting grammatical errors
NAACL 2000: Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics
Detecting errors in English article usage by non-native speakers
Natural Language Engineering
The ups and downs of preposition error detection in ESL writing
COLING '08: Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1
Detection of grammatical errors involving prepositions
SigSem '07: Proceedings of the Fourth ACL-SIGSEM Workshop on Prepositions
Web-scale N-gram models for lexical disambiguation
IJCAI '09: Proceedings of the 21st International Joint Conference on Artificial Intelligence
Using mostly native data to correct errors in learners' writing: a meta-classifier approach
HLT '10: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
HOO 2012: a report on the preposition and determiner error correction shared task
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP
Some grammatical error detection methods, including the ones currently used by the Educational Testing Service's e-rater system (Attali and Burstein, 2006), are tuned for precision because of the perceived high cost of false positives (i.e., marking fluent English as ungrammatical). Precision, however, is not optimal for all tasks, particularly the HOO 2012 Shared Task on grammatical errors, which uses F-score for evaluation. In this paper, we extend e-rater's preposition and determiner error detection modules with a large-scale n-gram method (Bergsma et al., 2009) that complements the existing rule-based and classifier-based methods. On the HOO 2012 Shared Task, the hybrid method performed better than its component methods in terms of F-score, and it was competitive with submissions from other HOO 2012 participants.
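To make the n-gram idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of count-based preposition error detection in the spirit of Bergsma et al. (2009): a candidate preposition is preferred when it is far more frequent than the writer's choice in the same lexical context. The `NGRAM_COUNTS` table and the `margin` threshold are hypothetical stand-ins for a web-scale n-gram resource such as the Google Web 1T corpus.

```python
# Toy stand-in for web-scale n-gram counts: (left word, preposition, right word).
# Real systems query billions of n-grams; these values are illustrative only.
NGRAM_COUNTS = {
    ("depends", "on", "the"): 9_500_000,
    ("depends", "of", "the"): 41_000,
    ("depends", "in", "the"): 120_000,
}

CANDIDATE_PREPOSITIONS = ["on", "of", "in", "at", "for"]

def score(left, prep, right):
    """Frequency of the trigram with the candidate preposition filled in."""
    return NGRAM_COUNTS.get((left, prep, right), 0)

def flag_preposition(left, written, right, margin=10.0):
    """Flag `written` as a likely error when some alternative preposition
    is at least `margin` times more frequent in the same context."""
    written_count = score(left, written, right)
    best = max(CANDIDATE_PREPOSITIONS, key=lambda p: score(left, p, right))
    best_count = score(left, best, right)
    if best != written and best_count > margin * max(written_count, 1):
        return best   # suggested correction
    return None       # no confident correction

print(flag_preposition("depends", "of", "the"))  # suggests "on"
print(flag_preposition("depends", "on", "the"))  # None: already the best choice
```

The `margin` parameter plays the precision/recall role discussed above: a high margin yields a high-precision, low-recall detector, while lowering it trades precision for F-score, as in the hybrid system's tuning for the HOO 2012 evaluation.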