Journal of Systems and Software
Inter-rater agreement is a well-known challenge and a key issue in fault classification. Fault classification is by nature a subjective task, since it depends heavily on the people performing the classification. Measures are required to prevent this subjectivity from propagating through the classification process into subsequent activities that use the classified faults, for example process improvement. One such measure is to use multiple raters and quantify their inter-rater agreement. In this paper, we evaluate the possibility of having an independent group of people classify faults. The objective is to evaluate whether such a group could be used in a process improvement initiative. An empirical study was conducted in which eight people classified 30 faults independently. The study concludes that the material provided was insufficient to obtain inter-rater agreement.
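The abstract does not name the agreement statistic used in the study. For settings like this one (several raters independently assigning items to categories) a common choice is Fleiss' kappa, which compares observed pairwise agreement against the agreement expected by chance. The sketch below is illustrative only, assuming a rating matrix where each row is one fault and each column counts how many raters chose that fault category:

```python
# Illustrative sketch: Fleiss' kappa for agreement among multiple raters.
# The paper does not specify its agreement measure; this is one common
# statistic for more than two raters, not the authors' actual method.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters who put item i into category j.
    Every row must sum to the same number of raters n."""
    N = len(counts)              # number of items (e.g. 30 faults)
    n = sum(counts[0])           # raters per item (e.g. 8 people)
    # Observed agreement: for each item, the fraction of rater pairs
    # that placed it in the same category.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    # Chance agreement from the overall category proportions.
    k = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    if P_e == 1.0:               # degenerate case: one category used
        return 1.0
    return (P_bar - P_e) / (1 - P_e)
```

For instance, eight raters agreeing unanimously on every fault yields a kappa of 1.0, while heavy disagreement drives the value toward (or below) zero, which is how a study like this one can state quantitatively that agreement was not obtained.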