Software testing is being added to programming courses at many schools, but current techniques for assessing student-written tests are imperfect. Code coverage measures are typically used in practice, but they have limitations and can overestimate the true quality of a test suite. Others have proposed mutation analysis instead, but it poses a number of practical obstacles to classroom use. This paper describes a new approach to mutation analysis of student-written tests that is more practical for educational use, especially in an automated grading context. The approach combines several techniques into a novel solution that addresses the shortcomings of more traditional mutation analysis. An evaluation in the context of both CS1 and CS2 courses illustrates how the approach differs from code coverage analysis. At the same time, the evaluation results raise questions of concern for CS educators: the relative value of more comprehensive assessment of test quality, the value of more open-ended assignments that give students significant design freedom, the cost of providing higher-quality reference solutions to support better quality assessment, and the cost of supporting assignments that require more intensive testing, such as GUI assignments.