A method of automatic grade calibration in peer assessment

  • Authors:
  • John Hamer, Kenneth T. K. Ma, Hugh H. F. Kwong

  • Affiliations:
  • University of Auckland, Auckland, New Zealand (all authors)

  • Venue:
  • ACE '05 Proceedings of the 7th Australasian conference on Computing education - Volume 42
  • Year:
  • 2005

Abstract

Once the exclusive preserve of small graduate courses, peer assessment is being rediscovered as an effective and efficient learning tool in large undergraduate classes, a transition made possible through the use of electronic assignment submissions and web-based support software.

Asking large numbers of undergraduates to grade each other's work raises a number of obvious concerns. How will mark reliability and validity be maintained? Can plagiarism be detected or prevented? What effect will "rogue" reviewers have on the integrity of the process? Will effective learning actually occur?

In this paper we address the issue of grade reliability, and present a novel technique for identifying and minimising the impact of "rogues." Simulations suggest the method is effective under a wide range of conditions.
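The abstract does not spell out the calibration technique itself. As a minimal sketch of the general idea behind automatic grade calibration, one plausible scheme (an illustration on our part, not necessarily the authors' published algorithm) alternates between computing weighted consensus grades and down-weighting reviewers who deviate from that consensus. All names below (`calibrate`, `grades`, the weighting function) are hypothetical.

```python
def calibrate(grades, iterations=10):
    """Estimate consensus grades while down-weighting outlier reviewers.

    grades maps submission -> {reviewer: grade}.
    Returns (consensus, weights).
    """
    reviewers = {r for marks in grades.values() for r in marks}
    weights = {r: 1.0 for r in reviewers}              # start with equal trust
    consensus = {}

    for _ in range(iterations):
        # Consensus pass: weighted mean of each submission's grades.
        for s, marks in grades.items():
            total = sum(weights[r] for r in marks)
            consensus[s] = sum(weights[r] * g for r, g in marks.items()) / total

        # Reweighting pass: reviewers far from consensus lose influence.
        for r in reviewers:
            devs = [abs(marks[r] - consensus[s])
                    for s, marks in grades.items() if r in marks]
            weights[r] = 1.0 / (1.0 + sum(devs) / len(devs))

    return consensus, weights


if __name__ == "__main__":
    # "mallory" plays the rogue: her marks swing wildly against the consensus.
    grades = {
        "essay1": {"alice": 8, "bob": 7, "mallory": 2},
        "essay2": {"alice": 6, "bob": 6, "mallory": 10},
    }
    consensus, weights = calibrate(grades)
    print(consensus)  # close to alice's and bob's marks
    print(weights)    # mallory ends up with a much smaller weight
```

The intuition matches the abstract's claim: consistent graders converge on a shared consensus while rogue marks are progressively discounted. The specific weighting function used here is illustrative only.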