As the application layer in embedded systems comes to dominate the hardware, ensuring software quality becomes a real challenge. Software testing is the most time-consuming and costly project phase, particularly in the embedded software domain. Misclassifying safe code as defective increases project cost and hence lowers margins. In this research, we present a defect prediction model based on an ensemble of classifiers. We have collaborated with an industrial partner from the embedded systems domain and apply our generic defect prediction models to data coming from embedded projects. The embedded systems domain resembles mission-critical software in that the goal is to catch as many defects as possible; therefore, the expectation from a predictor is a very high probability of detection (pd). On the other hand, most embedded systems in practice are commercial products, and companies want to remain competitive in their market by keeping their false alarm (pf) rates as low as possible and improving their precision. In our experiments, we used data collected from our industry partner as well as publicly available data. Our results reveal that an ensemble of classifiers significantly decreases pf down to 15% while increasing precision by 43%, keeping the balance rate at 74%. A cost-benefit analysis of the proposed model shows that inspecting 23% of the code on the local datasets is enough to detect around 70% of defects.
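The measures reported above (pd, pf, precision, and balance) can all be derived from a confusion matrix. A minimal sketch in Python, assuming the definition of balance commonly used in the defect-prediction literature (normalized Euclidean distance from the ideal point pd = 1, pf = 0); the counts passed in at the bottom are hypothetical, chosen only for illustration:

```python
import math

def prediction_metrics(tp, fn, fp, tn):
    """Compute defect-prediction measures from confusion-matrix counts.

    tp: defective modules correctly flagged as defective
    fn: defective modules missed by the predictor
    fp: safe modules flagged as defective (false alarms)
    tn: safe modules correctly passed
    """
    pd = tp / (tp + fn)             # probability of detection (recall)
    pf = fp / (fp + tn)             # probability of false alarm
    precision = tp / (tp + fp)      # fraction of flagged modules truly defective
    # balance: 1 minus the normalized distance from the ideal point (pd=1, pf=0)
    balance = 1 - math.sqrt((0 - pf) ** 2 + (1 - pd) ** 2) / math.sqrt(2)
    return pd, pf, precision, balance

# Hypothetical counts for illustration only (not the study's actual data)
pd, pf, precision, balance = prediction_metrics(tp=70, fn=30, fp=15, tn=85)
print(f"pd={pd:.2f} pf={pf:.2f} precision={precision:.2f} balance={balance:.2f}")
# → pd=0.70 pf=0.15 precision=0.82 balance=0.76
```

The balance measure captures the tension the abstract describes: pushing pd up while holding pf down, since improving either alone moves the operating point closer to the ideal corner of ROC space.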