Automating Model Building in c-rater

  • Authors:
  • Jana Z. Sukkarieh; Svetlana Stoyanchev

  • Affiliations:
  • Educational Testing Service, Princeton, NJ; Stony Brook University, Stony Brook, NY

  • Venue:
  • TextInfer '09: Proceedings of the 2009 Workshop on Applied Textual Inference
  • Year:
  • 2009

Abstract

c-rater is Educational Testing Service's technology for the content scoring of short student responses. A major step in the scoring process is Model Building, in which variants of the model answers corresponding to the rubric are generated for each item (test question). Until recently, Model Building was knowledge-engineered (KE) and hence labor- and time-intensive. In this paper, we describe our approach to automating Model Building in c-rater. We show that c-rater achieves comparable accuracy on automatically built and KE models.