To date, traditional NLP parsers have not been widely successful in TESOL-oriented applications, particularly in scoring written compositions. Re-engineering such applications to provide the robustness needed to handle ungrammatical English has proven a formidable obstacle. We discuss the use of a non-traditional parser for rating compositions that mitigates some of these difficulties: its dependency-based shallow parsing approach provides significant robustness in the face of language learners' ungrammatical writing. This paper describes how a corpus of L2 English essays was rated using the parser and how the automatic evaluations compared with those obtained by manual methods. The modifications made to the system are discussed, its current limitations are described, future plans for developing the system are sketched, and further applications beyond English essay rating are mentioned.
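To illustrate the graceful-degradation property that shallow, dependency-style parsing offers over full grammatical parsing, the toy sketch below attaches each token to a nearby verb and measures the attachment rate. Everything here is a hypothetical illustration, not the paper's actual system: the function names, the tiny POS tag set, and the attachment heuristic are all assumptions made for the example. The point it demonstrates is only that ungrammatical input lowers a feature score rather than causing a hard parse failure.

```python
# Hypothetical sketch of shallow, dependency-style attachment over
# POS-tagged tokens. The tag set {"N": noun, "V": verb, "D": determiner}
# and the nearest-verb heuristic are illustrative assumptions.

def shallow_attach(tagged):
    """Attach each non-verb token to the nearest verb, if any.

    tagged: list of (word, tag) pairs.
    Returns one head index per token, or None when no verb is reachable,
    so malformed input degrades coverage instead of failing outright.
    """
    verb_positions = [i for i, (_, tag) in enumerate(tagged) if tag == "V"]
    heads = []
    for i, (_, tag) in enumerate(tagged):
        if tag == "V" or not verb_positions:
            heads.append(None)
        else:
            heads.append(min(verb_positions, key=lambda v: abs(v - i)))
    return heads

def attachment_rate(tagged):
    """Fraction of non-verb tokens that found a head: a feature that
    degrades gracefully on ungrammatical learner text."""
    heads = shallow_attach(tagged)
    non_verb_heads = [h for (_, tag), h in zip(tagged, heads) if tag != "V"]
    return sum(h is not None for h in non_verb_heads) / max(len(non_verb_heads), 1)

grammatical = [("the", "D"), ("student", "N"), ("writes", "V"), ("essays", "N")]
fragment = [("the", "D"), ("student", "N"), ("essays", "N")]  # verbless fragment

print(attachment_rate(grammatical))  # 1.0: every non-verb token attached
print(attachment_rate(fragment))     # 0.0: low score, but no parse failure
```

A full grammar-based parser would typically reject the verbless fragment; the shallow approach instead returns a lower feature value that a rating model can use directly.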