One of the greatest challenges facing educators who teach courses with a significant programming component is deciding how to evaluate each student's programming ability. In this paper, we describe how we have addressed this challenge in an introductory computer science course, and we statistically analyze the results to examine potential inequities in our approach.