Predicting Fault Proneness of Classes Trough a Multiobjective Particle Swarm Optimization Algorithm
ICTAI '08 Proceedings of the 2008 20th IEEE International Conference on Tools with Artificial Intelligence - Volume 02
In the literature, the fault-proneness of classes or methods has been used to devise strategies for reducing testing cost and effort. Fault-proneness is generally predicted from a set of design metrics and, more recently, by using Machine Learning (ML) techniques. However, some ML techniques cannot handle unbalanced data, a characteristic very common in fault datasets, and the models they produce are not easily interpreted by most programmers and testers. Considering these facts, this paper introduces a novel fault-prediction approach based on Multiobjective Particle Swarm Optimization (MOPSO). Exploring Pareto dominance concepts, the approach generates a model composed of rules with specific properties. These rules can be used as an unordered classifier and are therefore more intuitive and comprehensible. Two experiments were conducted, considering the fault-proneness of classes and of methods, respectively. The results show interesting relationships between the studied metrics and fault prediction. In addition, the performance of the introduced MOPSO approach is compared with that of other ML algorithms using several measures, including the area under the ROC curve, a relevant criterion for dealing with unbalanced data.
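The Pareto dominance idea behind the rule selection can be sketched in a few lines. This is an illustrative example, not the paper's implementation: the two objectives are assumed here to be sensitivity and specificity (both to maximize), and the rule scores are hypothetical.

```python
# Illustrative sketch (not the paper's MOPSO implementation): Pareto
# dominance between candidate classification rules. Each rule is scored
# on two objectives to maximize -- assumed here to be (sensitivity,
# specificity). A rule survives if no other rule dominates it.

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (maximization):
    `a` is at least as good in every objective and strictly better in one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(rules):
    """Keep only the non-dominated objective vectors."""
    return [r for r in rules
            if not any(dominates(o, r) for o in rules if o is not r)]

# Hypothetical (sensitivity, specificity) scores for four candidate rules:
candidates = [(0.90, 0.40), (0.70, 0.70), (0.40, 0.90), (0.60, 0.60)]
print(pareto_front(candidates))
# (0.60, 0.60) is dropped: it is dominated by (0.70, 0.70).
```

The non-dominated rules trade off the two objectives against each other; collected together, they form the kind of unordered rule set the paper uses as an interpretable classifier.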