Better student assessing by finding difficulty factors in a fully automated comprehension measure

  • Authors:
  • Brooke Soden Hensler; Joseph Beck

  • Affiliations:
  • Robotics Institute, Carnegie Mellon University, Pittsburgh, PA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • ITS'06: Proceedings of the 8th International Conference on Intelligent Tutoring Systems
  • Year:
  • 2006

Abstract

The multiple choice cloze (MCC) question format is commonly used to assess students' comprehension. It is an especially useful format for intelligent tutoring systems (ITS) because it is fully automatable and can be applied to any text. Unfortunately, little is known about the factors that influence MCC question difficulty and student performance on such questions. To better understand student performance, we developed a model of MCC questions. Our model shows that the difficulty of the answer and the student's response time are the most important predictors of student performance. Beyond showing the relative impact of its terms, the model provides evidence of a developmental trend in syntactic awareness beginning around the 2nd grade. It also accounts for 10% more variance in students' external test scores than the standard scoring method for MCC questions does.
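
The abstract names the model's strongest predictors (answer difficulty and response time) but not its functional form. As a minimal, hypothetical sketch only (assuming a logistic-regression formulation and fully synthetic data; none of the variable names, coefficients, or directions of effect below come from the paper), fitting such a performance model might look like:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500

    # Two illustrative predictors named in the abstract: difficulty of the
    # answer (scaled 0..1, higher = harder) and response time in seconds.
    answer_difficulty = rng.uniform(0.0, 1.0, n)
    response_time = rng.gamma(2.0, 5.0, n)

    # Synthetic ground truth: harder answers and longer response times make
    # a correct response less likely (an assumed direction, for illustration).
    logits = 1.5 - 2.0 * answer_difficulty - 0.05 * response_time
    correct = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

    # Fit the assumed model: P(correct) ~ answer difficulty + response time.
    X = np.column_stack([answer_difficulty, response_time])
    model = LogisticRegression().fit(X, correct)

    print("coefficients (difficulty, response_time):", model.coef_[0])
    print("intercept:", model.intercept_[0])

Under this setup the fitted coefficients simply recover the simulated relationship; the paper's actual feature set, coefficients, and model family are not specified in the abstract.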