How to rate programming skills in programming experiments?: a preliminary, exploratory, study based on university marks, pretests, and self-estimation

  • Authors:
  • Sebastian Kleinschmager; Stefan Hanenberg

  • Affiliations:
  • University of Duisburg-Essen, Essen, Germany; University of Duisburg-Essen, Essen, Germany

  • Venue:
  • Proceedings of the 3rd ACM SIGPLAN workshop on Evaluation and usability of programming languages and tools
  • Year:
  • 2011


Abstract

Rating subjects is an important issue for empirical studies. First, studies that rely on comparisons between different groups need to ensure that those groups are balanced, i.e., that the subjects in the different groups are comparable. Second, in order to understand to what extent a study's results generalize, it is necessary to determine whether the participating subjects can be considered representative. Third, a deeper understanding of an experiment's results requires knowing which kinds of subjects achieved which results. This paper addresses this topic through a preliminary, exploratory study that analyzes three possible rating criteria: university marks, self-estimation, and pretests. It turns out that neither university marks nor pretests yielded better results than self-estimation.