Context: Assessing software quality at the early stages of the design and development process is very difficult, since most software quality characteristics are not directly measurable. Nonetheless, they can be derived from other, measurable attributes. For this purpose, software quality prediction models have been used extensively. However, building accurate prediction models is hard due to the scarcity of data in the software engineering domain. As a result, prediction models built on one data set show a significant deterioration in accuracy when used to classify new, unseen data.

Objective: The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when they are used to classify new data.

Method: This paper presents an adaptive approach that takes already built predictive models and adapts them (one at a time) to new data, using an ant colony optimization algorithm in the adaptation process. The approach is validated on the stability of classes in object-oriented software systems, but it can easily be applied to any other software quality characteristic. It can also be extended to software quality prediction problems involving more than two classification labels.

Results: Results show that our approach outperforms the machine learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models, which provide not only the classification label but also guidance on how to attain it.

Conclusion: Our approach is an adaptive one: it takes predictive models that have already been built from common domain data and adapts them to context-specific data. This suits the domain of software quality, where data is very scarce and predictive models built from one data set are hard to generalize and reuse on new data.
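To make the adaptation idea concrete, the following is a minimal, self-contained sketch of how an ant colony optimization loop could re-tune a single rule threshold ("stable if the coupling metric is at most t") against new data. All names, data, and parameters here are invented for illustration; the paper's actual models and algorithm are richer than this single-rule toy.

```python
import random

random.seed(42)  # deterministic for the sketch

def accuracy(threshold, data):
    """Fraction of (metric, is_stable) pairs the rule classifies correctly."""
    return sum((m <= threshold) == label for m, label in data) / len(data)

def aco_adapt(data, candidates, ants=10, iterations=30, rho=0.1):
    """Search candidate thresholds with pheromone-guided sampling."""
    pheromone = {c: 1.0 for c in candidates}
    best, best_acc = None, -1.0
    for _ in range(iterations):
        for _ in range(ants):
            # Each ant picks a candidate with probability proportional
            # to its accumulated pheromone.
            c = random.choices(candidates,
                               weights=[pheromone[x] for x in candidates])[0]
            acc = accuracy(c, data)
            if acc > best_acc:
                best, best_acc = c, acc
            # Deposit pheromone proportional to solution quality.
            pheromone[c] += acc
        # Evaporation keeps the search from converging prematurely.
        for c in candidates:
            pheromone[c] *= (1.0 - rho)
    return best, best_acc

# Toy "new, unseen" data the original rule was not trained on.
new_data = [(2, True), (3, True), (5, True), (8, False), (9, False), (12, False)]
old_threshold = 10                      # threshold learned on the old data set
candidates = list(range(1, 15))

adapted, acc = aco_adapt(new_data, candidates)
print("original rule accuracy:", accuracy(old_threshold, new_data))
print("adapted threshold:", adapted, "accuracy:", acc)
```

The adapted rule keeps the same human-readable form (a threshold on a design metric), which mirrors the expressiveness-preserving property claimed in the Results: adaptation moves the rule's boundary rather than replacing the model with an opaque one.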