We examine the precision with which the cumulative score from a suite of test cases ranks participants in the International Olympiad in Informatics (IOI). Our concern is whether these scores reflect achievement at all levels, as opposed to chance or arbitrary factors involved in composing the test suite. Test cases are assumed to be drawn from an infinite population of similar cases; variance in standardized rank is estimated by the bootstrap method and used to compute confidence intervals that contain the hypothetical true ranking with 95% probability. We examine the relative contribution of easy (so-called fifty-percent-rule) cases and hard cases to the overall ranking. Empirical results based on IOI 2005 suggest that easy and hard cases are both material to the ranking, but that the proportion of each is unimportant.
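The bootstrap procedure described above can be sketched as follows. This is a minimal illustration under assumed conditions, not the paper's actual implementation or data: a small synthetic score matrix stands in for the IOI results, test cases are resampled with replacement, and a percentile confidence interval is formed for one contestant's rank. The function names (`ranks`, `bootstrap_rank_ci`) and all parameters are hypothetical, and ties are broken crudely rather than by standardized rank as in the paper.

```python
import random

random.seed(0)

# Hypothetical data (NOT from the paper): scores for 5 contestants on
# 10 test cases, each case scored 0 or 1.
scores = [[random.randint(0, 1) for _ in range(10)] for _ in range(5)]

def ranks(case_idx):
    """Rank contestants (1 = best) by total score over the given test cases.

    Ties are broken by contestant index here for simplicity; the paper
    works with standardized ranks instead.
    """
    totals = [sum(row[j] for j in case_idx) for row in scores]
    order = sorted(range(len(totals)), key=lambda i: -totals[i])
    r = [0] * len(totals)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def bootstrap_rank_ci(contestant, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for one contestant's rank.

    Treats the test suite as a sample from an infinite population of
    similar cases: each replicate redraws the suite with replacement.
    """
    m = len(scores[0])
    samples = []
    for _ in range(n_boot):
        idx = [random.randrange(m) for _ in range(m)]  # resample cases
        samples.append(ranks(idx)[contestant])
    samples.sort()
    lo = samples[int(alpha / 2 * n_boot)]
    hi = samples[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_rank_ci(0)
```

A wide interval `[lo, hi]` for a contestant indicates that their position in the ranking is sensitive to the particular cases chosen, which is the kind of imprecision the study quantifies.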