Peer and self assessment in massive online classes

  • Authors:
  • Chinmay Kulkarni; Koh Pang Wei; Huy Le; Daniel Chia; Kathryn Papadopoulos; Justin Cheng; Daphne Koller; Scott R. Klemmer

  • Affiliations:
  • Stanford University, Stanford, CA; Stanford University, and Coursera, Inc., Stanford, CA; Coursera, Inc., Mountain View, CA; Stanford University, and Coursera, Inc., Stanford, CA; Stanford University, Stanford, CA; Stanford University, Stanford, CA; Stanford University, and Coursera, Inc., Stanford, CA; Stanford University, and University of California, San Diego

  • Venue:
  • ACM Transactions on Computer-Human Interaction (TOCHI)
  • Year:
  • 2013

Abstract

Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. In the second iteration, 42.9% of students’ grades were within 5% of the staff-assigned grade, and 65.5% were within 10%. On average, students assessed their own work 7% higher than staff did. Students also rated work by peers from their own country 3.6% higher than work from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students with more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance rubric items as candidates for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After we revised the rubrics accordingly, median grading error decreased from 12.4% to 9.9%.
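
The two quantitative checks the abstract mentions, comparing peer grades against staff grades and flagging high-variance rubric items, can be illustrated with small computations. The Python sketch below is an assumed, minimal rendering of those ideas, not the authors' implementation: the data layout (lists of per-item peer scores) and the names grading_error and high_variance_items are hypothetical.

    from statistics import median, variance

    def grading_error(peer_grades, staff_grade, max_points):
        """Absolute gap between the median peer grade and the staff grade,
        expressed as a percentage of the assignment's point total."""
        return abs(median(peer_grades) - staff_grade) / max_points * 100

    def high_variance_items(item_scores, threshold):
        """Flag rubric items whose peer scores vary widely on the same
        submission; such items are candidates for rewording (parallel
        sentence structure, unambiguous wording, well-specified dimensions).

        item_scores maps a rubric item name to the list of scores that
        different peers assigned for that item."""
        return [item for item, scores in item_scores.items()
                if len(scores) > 1 and variance(scores) > threshold]

    # Illustrative usage with made-up numbers (not data from the article):
    print(grading_error([8, 9, 7, 9], staff_grade=8, max_points=10))   # 5.0
    print(high_variance_items({"thesis": [1, 5, 2, 5],
                               "citations": [3, 3, 4]},
                              threshold=2.0))                          # ['thesis']

In practice, a course team would run such checks over many submissions and revise the rubric items that are flagged most often, which is the spirit of the data-driven revision the abstract describes.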