This paper introduces CTutor, an automated writing evaluation (AWE) tool for detecting breakdowns in local coherence, and reports on a study that applies it to the writing of Chinese L2 English learners. The program is based on Centering theory (CT), a theory of local coherence and salience. The principles of CT are first introduced, and then the design and function of CTutor are described. The program's effectiveness and reliability were evaluated in a study that compared CTutor and two human raters on the detection of local incoherence and the provision of revision feedback on learner essays. Intermediate Chinese English-as-a-foreign-language learners (n = 52) were divided into two groups: one receiving CTutor feedback and the other receiving feedback from human raters. Learners in both groups completed three essays, each involving the submission of a first draft, revision with feedback on local coherence quality, and re-submission. Our comparison between CTutor and the human experts showed that the tool detects local coherence breakdowns with moderate accuracy (F1-measure around 0.4). There was also little difference between participants' responses to CTutor feedback and human feedback in terms of revision behaviour, with both feedback modes producing similar revision patterns. Potential use of the program in instructional settings is discussed. © 2012 Wiley Periodicals, Inc.
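The F1-measure reported above is the harmonic mean of precision and recall. As a minimal sketch (the counts below are invented for illustration and are not the study's data), a breakdown detector like CTutor could be scored against human annotations as follows:

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """F1 = harmonic mean of precision and recall for breakdown detection."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 20 breakdowns correctly flagged, 30 false alarms,
# 30 breakdowns missed -> precision = 0.4, recall = 0.4, F1 = 0.4
print(round(f1_score(20, 30, 30), 2))  # -> 0.4
```

An F1 near 0.4 therefore indicates that the detector's flags and the raters' judgments overlap moderately, with both false alarms and misses contributing to the shortfall.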