Assessing creative problem-solving with automated text grading

  • Authors:
  • Hao-Chuan Wang; Chun-Yen Chang; Tsai-Yen Li

  • Affiliations:
  • Hao-Chuan Wang: Science Education Center, National Taiwan Normal University, Taipei, Taiwan; School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, United States
  • Chun-Yen Chang: Science Education Center, National Taiwan Normal University, Taipei, Taiwan; Department of Earth Sciences, National Taiwan Normal University, Taipei, Taiwan
  • Tsai-Yen Li: Department of Computer Science, National Chengchi University, Taipei, Taiwan

  • Venue:
  • Computers & Education
  • Year:
  • 2008

Abstract

This work aims to improve the assessment of creative problem-solving in science education by employing language technologies and computational-statistical machine learning methods to grade students' natural language responses automatically. Open-ended questions that elicit students' constructed responses are valuable for assessing constructs such as creative problem-solving with validity, but the high cost of manually grading constructed responses can be an obstacle to applying them. In this study, automated grading schemes were developed and evaluated in the context of secondary Earth science education. Empirical evaluations revealed that the automated grading schemes can reliably identify domain concepts embedded in students' natural language responses, with satisfactory inter-coder agreement against human coding in two sub-tasks of the test (Cohen's Kappa = .65-.72). When a single holistic score was computed for each student, machine-generated scores achieved high inter-rater reliability against human grading (Pearson's r = .92). The reliable performance in automatic concept identification and numeric grading demonstrates the potential of automated grading to support the use of open-ended questions in science assessments and to enable new technologies for science learning.
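The abstract reports two standard reliability statistics: Cohen's Kappa for categorical agreement on concept identification, and Pearson's r for correlation between numeric holistic scores. The sketch below (not the authors' implementation; the example labels and scores are hypothetical) shows how these two statistics are computed from paired human/machine judgments.

```python
# Minimal sketch of the two agreement statistics named in the abstract.
# The data below is invented for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters' labels."""
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

def pearson_r(x, y):
    """Pearson correlation between two lists of numeric scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical human vs. machine concept codes and holistic scores.
human_codes   = ["plate", "plate", "quake", "none", "quake", "plate"]
machine_codes = ["plate", "quake", "quake", "none", "quake", "plate"]
human_scores   = [3, 5, 2, 4, 1]
machine_scores = [3, 4, 2, 5, 1]

print(round(cohens_kappa(human_codes, machine_codes), 3))
print(round(pearson_r(human_scores, machine_scores), 3))
```

In practice such metrics would be computed per sub-task over the full sample of student responses; kappa values of .65-.72 are conventionally read as substantial agreement.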