Many statistical techniques have been proposed for predicting the fault-proneness of program modules in software engineering. Choosing the "best" model among the many available candidates requires performance assessment and detailed comparison, but such comparisons are not straightforward: performance measures vary, and each carries different verification and validation cost implications. A methodology for the precise definition and evaluation of predictive models is therefore still needed. We believe the procedure outlined here, if followed, has the potential to improve the statistical validity of future experiments.
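As a minimal sketch of why such comparisons are not straightforward, the snippet below evaluates two hypothetical fault-proneness predictors on the same invented validation set. Model A wins on recall, model B on precision, and the cost ranking depends entirely on the assumed unit costs of missed faults versus false alarms; all data and cost values here are illustrative assumptions, not results from any study.

```python
def confusion(y_true, y_pred):
    # Tally true/false positives and negatives over paired labels.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def metrics(y_true, y_pred, cost_fn=10.0, cost_fp=1.0):
    # cost_fn/cost_fp are assumed unit costs: a missed fault (fn) is
    # taken to be 10x more expensive than an unneeded review (fp).
    tp, fp, fn, tn = confusion(y_true, y_pred)
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "cost": fn * cost_fn + fp * cost_fp,
    }

# Invented validation data: 1 = fault-prone module, 0 = clean.
y_true  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
model_a = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]  # aggressive: flags many modules
model_b = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # conservative: flags few modules

for name, pred in [("A", model_a), ("B", model_b)]:
    print(name, metrics(y_true, pred))
```

With the assumed costs, model A is cheaper (it misses no faults); flipping the cost ratio (`cost_fn=1.0, cost_fp=10.0`) reverses the ranking, which is exactly the sensitivity to cost implications that a precise evaluation methodology must make explicit.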