Context: Effort-aware models, such as effort-aware bug prediction models, aim to help practitioners identify and prioritize buggy software locations according to the effort required to fix the bugs. Since the effort of current bugs is not yet known, and the effort of past bugs is typically not explicitly recorded, effort-aware bug prediction models must rely on approximations of effort, such as the number of lines of code (LOC) of the predicted files.

Objective: Although the choice of approximation is critical to the performance of the prediction models, there is no empirical evidence on whether LOC is actually a good approximation. Therefore, in this paper we investigate the question: is LOC a good measure of effort for use in effort-aware models?

Method: We perform an empirical study on four open source projects for which we obtain explicitly recorded effort data, and we compare LOC to various complexity, size, and churn metrics as measures of effort.

Results: We find that a combination of complexity, size, and churn metrics is a better measure of effort than LOC alone. Furthermore, we examine the impact of our findings on previous effort-aware bug prediction work and find that using LOC as a measure of effort does not significantly affect the list of files being flagged; however, relative to our best effort predictor, LOC under-estimates the amount of effort required by approximately 66%.

Conclusion: Studies using effort-aware models should not assume that LOC is a good measure of effort. For effort-aware bug prediction, using LOC yields results similar to combining complexity, churn, size, and LOC as a proxy for effort when prioritizing the most risky files. However, for the purpose of effort estimation, using LOC may under-estimate the amount of effort required.
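As a rough illustration of the effort-aware prioritization the abstract describes, the following Python sketch ranks files by predicted bug risk per unit of estimated effort, once using LOC alone as the effort proxy and once using a combined complexity/size/churn proxy. All file names, risk scores, metric values, and weights here are invented for the example; this is not the authors' implementation or their actual weighting scheme.

# Hypothetical sketch of effort-aware prioritization: rank files by
# predicted bug risk per unit of "effort", comparing LOC alone against
# a combined size/churn/complexity proxy. Data and weights are made up
# for illustration; they are not taken from the paper.

files = [
    # (name,          predicted_risk, loc,  churn, complexity)
    ("Parser.java",   0.80,           1200, 300,   45),
    ("Util.java",     0.30,           200,  10,    5),
    ("Engine.java",   0.65,           800,  250,   60),
    ("Config.java",   0.20,           150,  5,     3),
]

def effort_loc(loc, churn, complexity):
    """Effort proxy commonly used in prior work: lines of code only."""
    return loc

def effort_combined(loc, churn, complexity,
                    w_loc=0.4, w_churn=0.4, w_cc=0.2):
    """Illustrative combined proxy: a weighted sum of size, churn,
    and complexity (the weights are invented for this sketch)."""
    return w_loc * loc + w_churn * churn + w_cc * complexity

def prioritize(effort_fn):
    """Order files by predicted risk per unit of estimated effort,
    highest first (the usual effort-aware ranking)."""
    ranked = sorted(
        files,
        key=lambda f: f[1] / effort_fn(f[2], f[3], f[4]),
        reverse=True,
    )
    return [name for name, *_ in ranked]

print("LOC-only ranking:", prioritize(effort_loc))
print("Combined ranking:", prioritize(effort_combined))

With these made-up numbers, the two proxies happen to produce the same ranking, which mirrors the paper's finding that the list of flagged files changes little, even though the absolute effort totals implied by the two proxies differ considerably.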