There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects do not have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Unfortunately, results in this area have largely been disheartening: most experiments in cross-project defect prediction report poor performance under the standard IR measures of precision, recall, and F-score. We argue that these IR-based measures, while broadly applicable, are not well suited to the quality-control settings in which defect prediction models are actually used. Specifically, these measures are taken at a single threshold setting (typically a threshold on the predicted probability of defectiveness returned by a logistic regression model). In practice, however, software quality-control processes choose from a range of time-and-cost vs. quality tradeoffs: how many files shall we test? How many shall we inspect? Thus, we argue that measures based on a variety of such tradeoffs, viz., testing or inspecting the 5%, 10%, or 20% of files judged most defect-prone, are more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!
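To make the proposed evaluation concrete, the following is a minimal sketch in Python of the budget-based measure described above: files are ranked by a model's predicted probability of defectiveness, and we report the fraction of truly defective files caught when only the top 5%, 10%, or 20% of files are tested or inspected. The function name, data layout, and sample numbers here are illustrative assumptions, not taken from the paper itself.

# Sketch of an effort-based evaluation measure for defect prediction:
# rank files by predicted defect probability, then measure the recall
# achieved when only a fixed percentage of files is tested/inspected.

def recall_at_top_percent(probs, labels, percent):
    """Fraction of defective files found when inspecting the top
    `percent`% of files, ranked by descending predicted probability."""
    ranked = sorted(zip(probs, labels), key=lambda pl: pl[0], reverse=True)
    # Inspection budget: at least one file, truncating fractional counts.
    budget = max(1, int(len(ranked) * percent / 100))
    found = sum(label for _, label in ranked[:budget])
    total_defective = sum(labels)
    return found / total_defective if total_defective else 0.0

# Hypothetical example: model-predicted probabilities vs. actual outcomes.
probs = [0.91, 0.85, 0.40, 0.32, 0.20, 0.15, 0.08, 0.05, 0.03, 0.01]
labels = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0]  # 1 = file turned out defective

for p in (5, 10, 20):  # the inspection budgets discussed in the abstract
    print(f"recall at top {p}% of files: "
          f"{recall_at_top_percent(probs, labels, p):.2f}")

Unlike precision/recall/F-score at a single probability cutoff, this measure varies the cutoff with the available quality-control budget, so two models can be compared at whatever fraction of files a team can actually afford to test or inspect.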