Many real-time software systems, including telecontrol/telepresence systems, robotic systems, and mission planning systems, may synthesize code dynamically based on runtime mission-specific requirements and operating conditions. This calls for dynamic dependability assessment to ensure that such systems perform as specified and do not fail in catastrophic ways. One way to achieve this is to dynamically assess the modules in the synthesized code using software defect prediction techniques. Statistical models, such as stepwise multi-linear regression and multivariate models, and machine learning approaches, such as artificial neural networks, instance-based reasoning, Bayesian belief networks, decision trees, and rule induction, have been investigated for predicting software quality. However, there is still no consensus on the best predictor model for software defects. In this paper, we evaluate different predictor models on four real-time software defect data sets. The results show that combining 1R and Instance-based Learning with the Consistency-based Subset Evaluation technique yields more consistent prediction accuracy than the other models. The results also show that "size" and "complexity" metrics alone are not sufficient for accurately predicting real-time software defects.
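The evaluation pipeline described above (a feature subset selection step feeding an instance-based classifier, scored by cross-validated accuracy) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual setup: scikit-learn's `KNeighborsClassifier` stands in for Instance-based Learning, and greedy forward selection via `SequentialFeatureSelector` approximates Consistency-based Subset Evaluation; the data, feature counts, and parameters are all assumptions for the example.

```python
# Hypothetical sketch of defect prediction with feature subset selection
# followed by an instance-based learner, evaluated by cross-validation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for module-level metrics (e.g., size/complexity-style
# features) with binary defective/non-defective labels.
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           random_state=0)

# Instance-based learner: classify a module by its nearest neighbors.
knn = KNeighborsClassifier(n_neighbors=3)

# Greedy forward selection of a feature subset, used here as a rough
# analogue of consistency-based subset evaluation.
selector = SequentialFeatureSelector(knn, n_features_to_select=4)
model = make_pipeline(selector, knn)

# Cross-validated accuracy, the kind of consistency measure compared
# across predictor models in the study.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

A pipeline keeps the subset selection inside each cross-validation fold, so the selected features never leak information from the held-out modules into training.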