Benefits of modularity in an automated essay scoring system

  • Authors: Jill Burstein; Daniel Marcu

  • Affiliations: ETS Technologies, Princeton, NJ; University of Southern California, Marina del Rey, CA

  • Venue: Proceedings of the COLING-2000 Workshop on Using Toolsets and Architectures To Build NLP Systems

  • Year: 2000

Abstract

E-rater is an operational automated essay scoring application. The system combines several NLP tools that identify linguistic features in essays to evaluate the quality of essay text. The application currently identifies a variety of syntactic, discourse, and topical analysis features. We have maintained two clear visions of e-rater's development. First, new linguistically based features would be added to strengthen the connections between human scoring guide criteria and e-rater scores. Second, e-rater would be adapted to automatically provide explanatory feedback about writing quality. This paper provides two examples of the flexibility of e-rater's modular architecture for continued application development toward these goals. Specifically, we discuss a) how additional features from rhetorical parse trees were integrated into e-rater, and b) how the salience of automatically generated, discourse-based essay summaries was evaluated for use as instructional feedback through the re-use of e-rater's topical analysis module.
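
The abstract describes a pipeline of independently swappable feature modules. As a rough illustration of that architectural idea only (the paper does not publish e-rater's code, and every class and method name below is hypothetical), a minimal sketch in Python might define a common feature-extraction interface so that a new module, such as one reading features off rhetorical parse trees, can be added, or an existing one re-used, without touching the rest of the system:

    from typing import Dict, List, Protocol

    class FeatureModule(Protocol):
        """Interface every analysis module exposes to the scorer."""
        name: str

        def extract(self, essay: str) -> Dict[str, float]:
            """Map an essay to named feature values."""
            ...

    class SyntacticModule:
        name = "syntactic"

        def extract(self, essay: str) -> Dict[str, float]:
            # Toy stand-in for real parser output: average sentence length.
            sentences = [s for s in essay.split(".") if s.strip()]
            words = essay.split()
            return {"avg_sentence_length": len(words) / max(len(sentences), 1)}

    class DiscourseModule:
        name = "discourse"

        def extract(self, essay: str) -> Dict[str, float]:
            # Toy stand-in for discourse analysis: count of cue words.
            cues = {"first", "second", "however", "therefore", "finally"}
            tokens = [t.strip(",.").lower() for t in essay.split()]
            return {"cue_word_count": float(sum(t in cues for t in tokens))}

    class EssayScorer:
        """Combines independent modules into one feature vector."""

        def __init__(self, modules: List[FeatureModule]) -> None:
            self.modules = modules

        def featurize(self, essay: str) -> Dict[str, float]:
            features: Dict[str, float] = {}
            for module in self.modules:
                for key, value in module.extract(essay).items():
                    # Namespace features by module so additions never collide.
                    features[f"{module.name}.{key}"] = value
            return features

    if __name__ == "__main__":
        scorer = EssayScorer([SyntacticModule(), DiscourseModule()])
        sample = ("First, modular systems are flexible. "
                  "Therefore, new feature modules plug in easily.")
        print(scorer.featurize(sample))

Under a scheme like this, the rhetorical-parse-tree features in a) would amount to one more class implementing extract, and the summary-salience evaluation in b) would simply be a second caller of the same topical analysis module.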