A Novel Technique for Automated Linguistic Quality Assessment of Students' Essays Using Automatic Summarizers

  • Authors:
  • Seemab Latif; Mary McGee Wood

  • Venue:
  • CSIE '09 Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering - Volume 05
  • Year:
  • 2009

Abstract

This paper reports experiments that measure inter-annotator inconsistency in content selection during both manual and automatic summarization of sample TOEFL essays. A new finding is that the linguistic quality of the source essay correlates strongly with the degree of disagreement among human summarizers as to what should be included in a summary. This leads to a fully automated essay evaluation technique based on the degree of disagreement among automatic summarizers. ROUGE evaluation is used to measure the degree of inconsistency among the participants (human summarizers and automatic summarizers). This automated essay evaluation technique is potentially an important contribution with wider significance.
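The abstract does not specify which ROUGE variant is used or how pairwise scores are aggregated into a disagreement measure, so the following is a minimal sketch of the general idea, assuming a unigram-overlap (ROUGE-1) F-measure and defining disagreement as one minus the mean pairwise score among the summarizers' outputs; the function names and the aggregation scheme are illustrative assumptions, not the authors' exact method.

```python
from collections import Counter
from itertools import combinations

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap (ROUGE-1) F-measure between two summaries."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared word counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def disagreement(summaries: list[str]) -> float:
    """Disagreement among summarizers: 1 minus mean pairwise ROUGE-1 F-score.

    Assumption: the paper's measure may aggregate differently; this is a sketch.
    """
    pairs = list(combinations(summaries, 2))
    mean_overlap = sum(rouge1_f(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - mean_overlap

# Hypothetical example: summaries of one essay from three automatic summarizers.
summaries = [
    "The essay argues that technology improves education outcomes.",
    "Technology is claimed to improve outcomes in education.",
    "The author gives several unrelated anecdotes about school life.",
]
print(f"disagreement = {disagreement(summaries):.3f}")
```

Under the paper's finding, a higher disagreement score among the summarizers would indicate lower linguistic quality of the source essay, which is what allows the disagreement measure to stand in as a fully automated quality score.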