Dealing with open-answer questions in a peer-assessment environment

  • Authors:
  • Andrea Sterbini; Marco Temperini

  • Affiliations:
  • Computer Science, Sapienza University of Rome, Rome, Italy; Dept. of Computer, Control, and Management Engineering, Sapienza University of Rome, Rome, Italy

  • Venue:
  • ICWL'12: Proceedings of the 11th International Conference on Advances in Web-Based Learning
  • Year:
  • 2012

Abstract

Correcting open-answer questions is a heavy task since, in principle, every student's answer has to be graded. In this paper we show that the teacher's workload on open-question questionnaires can be reduced by a module that manages a rough constraint-based model of the students' decisions in a peer-assessment task. By modeling students' decisions we relate their competence on the topic (K) to their ability to judge (J) others' work and to the correctness (C) of their own (open) answer. The network of constraints and relations that the students' choices establish among these variables allows us to constrain the set of possible values of the answers' correctness (C). Our system suggests which subset of the answers the teacher should correct in order to narrow the set of hypotheses and produce a complete set of grades. The model is quite simple, yet sufficient to show that the number of required corrections can be as small as half of the initial answers. To support this result, we report on an extensive set of simulated experiments that address three research questions: 1) can the method deduce the whole set of grades with few corrections? 2) which set of parameters is best for running actual experiments? 3) is the model "robust" with respect to simulations with a high probability of random data?
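
To make the mechanism concrete, below is a minimal Python sketch of how such constraint propagation could work: each student gets ternary K/J/C variables whose joint domain is pruned against the peer grades, and a single teacher correction fixes one C value and narrows the remaining hypotheses. The level domains, sample grades, student names, and the two toy constraints are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch (not the authors' implementation) of the constraint
# idea: ternary Knowledge (K), Judgement (J) and Correctness (C) variables
# per student, pruned by an arc-consistency-style loop over peer grades.
# Domains, sample grades and the two toy constraints are assumptions.

from itertools import product

LEVELS = (0, 1, 2)  # 0 = low, 1 = medium, 2 = high

# grades[(s, t)] = grade that student s gave to student t's answer
grades = {("s1", "s2"): 2, ("s2", "s3"): 0, ("s3", "s1"): 2}
students = {"s1", "s2", "s3"}

# Hypothesis space: every (K, J, C) triple still considered possible.
domains = {s: set(product(LEVELS, repeat=3)) for s in students}


def propagate():
    """Drop triples that have no support across some grading edge.

    Toy constraints assumed here:
      * J <= K and C <= K (no one judges or answers above their knowledge);
      * a grader with J = 2 assigns a grade equal to the target's C.
    """
    changed = True
    while changed:
        changed = False
        for (s, t), g in grades.items():
            # Revise the grader s against t's surviving C values.
            new_s = {(k, j, c) for (k, j, c) in domains[s]
                     if j <= k and c <= k
                     and any(j < 2 or g == ct for (_, _, ct) in domains[t])}
            # Revise the graded student t against s's surviving J values.
            new_t = {(k, j, c) for (k, j, c) in domains[t]
                     if j <= k and c <= k
                     and any(js < 2 or g == c for (_, js, _) in domains[s])}
            if new_s != domains[s] or new_t != domains[t]:
                domains[s], domains[t] = new_s, new_t
                changed = True


propagate()

# The teacher corrects one answer (here: s2's answer is wrong, C = 0);
# fixing that value and re-propagating narrows the peers' hypotheses,
# e.g. s1 graded s2 highly, so s1 can no longer be a fully reliable judge.
domains["s2"] = {kjc for kjc in domains["s2"] if kjc[2] == 0}
propagate()

for s in sorted(students):
    print(s, "possible C:", sorted({c for (_, _, c) in domains[s]}),
          "possible J:", sorted({j for (_, j, _) in domains[s]}))
```

In this toy run, grading one answer eliminates all J = 2 hypotheses for its grader, which is the kind of domain narrowing that lets the system pick the next most informative answer for the teacher to correct.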