Analysis of bug fixing processes using program slicing metrics
PROFES'10: Proceedings of the 11th International Conference on Product-Focused Software Process Improvement
In this paper, we introduce 13 program slicing metrics for C programs. These metrics use program slice information to measure the size, complexity, coupling, and cohesion of programs. Unlike traditional code metrics, which are based on code statements or code structure, program slicing metrics measure program behavior. To evaluate the program slicing metrics, we compared them with the Understand for C++ suite, a set of widely used traditional code metrics, in a series of bug classification experiments. We computed both the program slicing and the Understand for C++ metrics for 887 revisions of the Apache HTTP project and 76 revisions of the Latex2rtf project, used them to classify source code files or functions as either buggy or bug-free, and then compared their classification accuracy. The program slicing metrics performed slightly better than the Understand for C++ metrics: at the file level, they achieved an overall accuracy of 82.6% (Apache) and 92% (Latex2rtf), compared with 80.4% (Apache) and 88% (Latex2rtf) for the Understand for C++ metrics. The experiments show that program slicing metrics have at least the same bug classification performance as the Understand for C++ metrics.
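The classification workflow described above (compute per-file metric vectors, label files buggy or bug-free, classify, measure accuracy) can be sketched as follows. This is a minimal illustration using a simple nearest-centroid classifier; the abstract does not specify which classifier the authors used, and the metric names and values below are hypothetical stand-ins for the paper's 13 slicing metrics.

```python
# Hedged sketch of metric-based buggy/bug-free classification.
# The classifier choice (nearest centroid) and all data are illustrative
# assumptions, not the paper's actual method or dataset.
from math import dist  # Euclidean distance, Python 3.8+


def nearest_centroid_classify(train, labels, sample):
    """Assign `sample` the label of the closest per-class centroid."""
    centroids = {}
    for lab in set(labels):
        rows = [v for v, l in zip(train, labels) if l == lab]
        centroids[lab] = tuple(sum(col) / len(rows) for col in zip(*rows))
    return min(centroids, key=lambda lab: dist(sample, centroids[lab]))


def accuracy(predictions, truth):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)


# Hypothetical per-file metric vectors, e.g. (slice count, slice size, coverage).
train = [(2, 10, 0.9), (3, 12, 0.8), (9, 40, 0.3), (8, 35, 0.4)]
labels = ["bug-free", "bug-free", "buggy", "buggy"]

test_files = [(2, 11, 0.85), (9, 38, 0.35)]
truth = ["bug-free", "buggy"]
preds = [nearest_centroid_classify(train, labels, s) for s in test_files]
print(preds, accuracy(preds, truth))  # → ['bug-free', 'buggy'] 1.0
```

In the paper's setting, each vector would instead hold the 13 slicing metrics (or the Understand for C++ metrics) for one file or function, and accuracy would be compared between the two metric suites.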