The need for accurate software prediction systems increases as software becomes larger and more complex. A variety of techniques have been proposed; however, none has proven consistently accurate, and there is still much uncertainty as to which technique suits which type of prediction problem. We believe that the underlying characteristics of the data set (size, number of features, type of distribution, etc.) influence the choice of the prediction system to be used. For this reason, we would like to control the characteristics of such data sets in order to systematically explore the relationship between accuracy, choice of prediction system, and data set characteristics. Also, previous work has found it difficult to obtain significant results over small data sets; consequently, it would be useful to have a large validation data set. Our solution is to simulate data, allowing both control and large validation sets (1,000 cases). In this paper, we compare four prediction techniques: regression, rule induction, nearest neighbor (a form of case-based reasoning), and neural nets. The results suggest that there are significant differences depending upon the characteristics of the data set. Consequently, researchers should consider prediction context when evaluating competing prediction systems. We also observed that the "messier" the data and the more complex the relationship with the dependent variable, the more variable the results. In the more complex cases, we observed significantly different results depending upon the particular training set sampled from the underlying data set. This suggests that researchers will need to exercise caution when comparing different approaches and should use procedures such as bootstrapping to generate multiple samples for training purposes.
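The bootstrapping procedure recommended above can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: the data set, the 1-nearest-neighbor predictor, and the MMRE accuracy measure are all assumptions chosen to show how resampling the training set exposes variability in a prediction system's accuracy.

```python
import random
import statistics

def bootstrap_samples(data, n_samples, seed=0):
    """Draw n_samples bootstrap training sets (sampling with replacement)."""
    rng = random.Random(seed)
    return [[rng.choice(data) for _ in data] for _ in range(n_samples)]

def mmre(predict, test_set):
    """Mean magnitude of relative error over (feature, actual) pairs."""
    return statistics.mean(
        abs(actual - predict(x)) / actual for x, actual in test_set
    )

# Hypothetical data: (size, effort) pairs with a noisy linear relationship.
rng = random.Random(42)
train_pool = [(s, 3.0 * s + rng.gauss(0, 2.0)) for s in range(1, 31)]
test_set = [(s, 3.0 * s) for s in range(31, 41)]

scores = []
for train in bootstrap_samples(train_pool, n_samples=20):
    # 1-nearest-neighbor predictor: reuse the effort of the closest case.
    predict = lambda x, t=train: min(t, key=lambda case: abs(case[0] - x))[1]
    scores.append(mmre(predict, test_set))

print(f"MMRE over 20 bootstrap samples: "
      f"mean={statistics.mean(scores):.3f}, sd={statistics.stdev(scores):.3f}")
```

The spread of MMRE across bootstrap samples is the point of interest: a nonzero standard deviation shows that a single train/test split can over- or understate a technique's accuracy, which is why multiple resampled training sets are needed before comparing competing prediction systems.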
However, our most important result is that it is more fruitful to ask which is the best prediction system in a particular context rather than which is the "best" prediction system.