A major area of investigation in the SPICE trials is the reliability of assessments, both its evaluation and its improvement. Previous reliability studies in the trials focused on evaluating reliability. In this paper we report on a study that aimed to generate recommendations for improving the reliability of assessments by constructing an explanatory model. The study attempted to identify factors that affect reliability. Using data from three assessments, we constructed a model that explains some of the variation in assessment reliability. The two factors considered were the capability of the processes being assessed and the point during an assessment at which ratings are made (other factors, such as assessor experience, were held constant). The model suggests that future assessment processes should consist of two contiguous phases in order to increase reliability: a first phase devoted solely to data collection, followed by a second phase in which the ratings are made.
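Reliability in this setting is typically quantified as interrater agreement between independent assessors rating the same processes. As a hedged illustration (the specific statistic and rating data below are assumptions, not taken from the study), the following sketch computes Cohen's kappa, a common chance-corrected agreement measure, for two assessors rating practices on a SPICE-style four-point N/P/L/F adequacy scale:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of exact agreement and p_e is the agreement expected
    by chance from each rater's marginal rating frequencies.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed proportion of exact agreement.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the two raters' marginal distributions.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings by two assessors of the same eight practices
# on the N (not) / P (partially) / L (largely) / F (fully) scale.
assessor_1 = ["F", "L", "L", "P", "N", "F", "L", "P"]
assessor_2 = ["F", "L", "P", "P", "N", "F", "F", "P"]
print(round(cohen_kappa(assessor_1, assessor_2), 2))  # prints 0.67
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; comparing kappa across assessment designs (e.g. rating during versus after data collection) is one way the effect of the two factors above could be examined.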