Software engineering metrics and models
A Pattern Recognition Approach for Software Engineering Data Analysis
IEEE Transactions on Software Engineering - Special issue on software measurement principles, techniques, and environments
IEEE Transactions on Software Engineering - Special issue on software reliability
Robust regression for developing software estimation models
Journal of Systems and Software
Machine Learning Approaches to Estimating Software Development Effort
IEEE Transactions on Software Engineering
Machine learning, neural and statistical classification
Estimating Software Project Effort Using Analogies
IEEE Transactions on Software Engineering
A Procedure for Analyzing Unbalanced Datasets
IEEE Transactions on Software Engineering
Effort estimation and prediction of object-oriented systems
Journal of Systems and Software
Explaining the cost of European space and military projects
Proceedings of the 21st international conference on Software engineering
An assessment and comparison of common software cost estimation modeling techniques
Proceedings of the 21st international conference on Software engineering
A Controlled Experiment to Assess the Benefits of Estimating with Analogy and Regression Models
IEEE Transactions on Software Engineering
A replicated assessment and comparison of common software cost modeling techniques
Proceedings of the 22nd international conference on Software engineering
An investigation of machine learning based prediction systems
Journal of Systems and Software - Special issue on empirical studies of software development and evolution
Software Cost Estimation with Incomplete Data
IEEE Transactions on Software Engineering
IEEE Transactions on Software Engineering - Special section on the seventh international software metrics symposium
Comparing Software Prediction Techniques Using Simulation
IEEE Transactions on Software Engineering - Special section on the seventh international software metrics symposium
An Empirical Study of Analogy-based Software Effort Estimation
Empirical Software Engineering
Experience With the Accuracy of Software Maintenance Task Effort Prediction Models
IEEE Transactions on Software Engineering
A Further Empirical Investigation of the Relationship Between MRE and Project Size
Empirical Software Engineering
Identifying High Performance ERP Projects
IEEE Transactions on Software Engineering
Human Performance Estimating with Analogy and Regression Models: An Empirical Validation
METRICS '98 Proceedings of the 5th International Symposium on Software Metrics
An Investigation of Analysis Techniques for Software Datasets
METRICS '99 Proceedings of the 6th International Symposium on Software Metrics
Using Public Domain Metrics To Estimate Software Development Effort
METRICS '01 Proceedings of the 7th International Symposium on Software Metrics
A Simulation Study of the Model Evaluation Criterion MMRE
IEEE Transactions on Software Engineering
A flexible method for software effort estimation by analogy
Empirical Software Engineering
Predicting object-oriented software maintainability using multivariate adaptive regression splines
Journal of Systems and Software
Decision Support Analysis for Software Effort Estimation by Analogy
PROMISE '07 Proceedings of the Third International Workshop on Predictor Models in Software Engineering
IEEE Transactions on Software Engineering
Applying machine learning to software fault-proneness prediction
Journal of Systems and Software
Comparing cost prediction models by resampling techniques
Journal of Systems and Software
Confidence in software cost estimation results based on MMRE and PRED
Proceedings of the 4th international workshop on Predictor models in software engineering
An empirical validation of a neural network model for software effort estimation
Expert Systems with Applications: An International Journal
Proceedings of the Second ACM-IEEE international symposium on Empirical software engineering and measurement
A study of project selection and feature weighting for analogy based software cost estimation
Journal of Systems and Software
An early software-quality classification based on improved grey relational classifier
Expert Systems with Applications: An International Journal
Improved estimation of software project effort using multiple additive regression trees
Expert Systems with Applications: An International Journal
Using uncertainty as a model selection and comparison criterion
PROMISE '09 Proceedings of the 5th International Conference on Predictor Models in Software Engineering
Synthesis, Analysis, and Modeling of Large-Scale Mission-Critical Embedded Software Systems
ICSP '09 Proceedings of the International Conference on Software Process: Trustworthy Software Development Processes
A study of the non-linear adjustment for analogy based software cost estimation
Empirical Software Engineering
Scope error detection and handling concerning software estimation models
ESEM '09 Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement
Empirical Software Engineering
Stable rankings for different effort models
Automated Software Engineering
Analytics for software development
Proceedings of the FSE/SDP workshop on Future of software engineering research
Combining techniques for software quality classification: An integrated decision network approach
Expert Systems with Applications: An International Journal
Human judgement and software metrics: vision for the future
Proceedings of the 2nd International Workshop on Emerging Trends in Software Metrics
A bayesian network based approach for software defects prediction
ACM SIGSOFT Software Engineering Notes
Systematic literature review of machine learning based software development effort estimation models
Information and Software Technology
Validity and reliability of evaluation procedures in comparative studies of effort prediction models
Empirical Software Engineering
A replicated assessment and comparison of adaptation techniques for analogy-based effort estimation
Empirical Software Engineering
On the dataset shift problem in software engineering prediction models
Empirical Software Engineering
Empirical Software Engineering - Special issue on repeatable results in software engineering prediction
Searching for rules to detect defective modules: A subgroup discovery approach
Information Sciences: an International Journal
Evaluating prediction systems in software project estimation
Information and Software Technology
Evaluating defect prediction approaches: a benchmark and an extensive comparison
Empirical Software Engineering
Size doesn't matter?: on the value of software size features for effort estimation
Proceedings of the 8th International Conference on Predictive Models in Software Engineering
Empirical Software Engineering
Functional Link Artificial Neural Networks for Software Cost Estimation
International Journal of Applied Evolutionary Computation
Software effort models should be assessed via leave-one-out validation
Journal of Systems and Software
On the value of outlier elimination on software effort estimation research
Empirical Software Engineering
Information and Software Technology
A study of subgroup discovery approaches for defect prediction
Information and Software Technology
Finding conclusion stability for selecting the best effort predictor in software effort estimation
Automated Software Engineering
Software defect prediction using Bayesian networks
Empirical Software Engineering
Applications of fuzzy integrals for predicting software fault-prone
Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology
Empirical studies on software prediction models do not converge with respect to the question "which prediction model is best?" The reason for this lack of convergence is poorly understood. In this simulation study, we have examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross validation. Typically, these empirical studies compare a machine learning model with a regression model; following that lead, we use simulation to compare a machine learning model with a regression model. The results suggest that it is the research procedure itself that is unreliable, and this unreliability may strongly contribute to the lack of convergence. Our findings thus cast some doubt on the conclusions of any comparative study of software prediction models that used this research procedure as the basis of model comparison. We therefore need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.
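The research procedure the abstract critiques can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the authors' actual simulation: a simple linear model stands in for "regression", a 1-nearest-neighbour rule stands in for "estimation by analogy" (the machine learning side), MMRE is the accuracy indicator, and leave-one-out cross validation supplies the comparison. All function names and the data-generating process are invented for the sketch; the point is that repeating the procedure on fresh samples from the same population can flip which model "wins".

```python
import random
import statistics

def simulate_project(rng):
    # Hypothetical data-generating process: effort grows linearly with size, plus noise.
    size = rng.uniform(10, 100)
    effort = 5 * size + rng.gauss(0, 50)
    return size, max(effort, 1.0)  # keep effort positive so MMRE is defined

def fit_linear(train):
    # Ordinary least squares for effort = a + b * size.
    n = len(train)
    mx = sum(s for s, _ in train) / n
    my = sum(e for _, e in train) / n
    b = sum((s - mx) * (e - my) for s, e in train) / sum((s - mx) ** 2 for s, _ in train)
    a = my - b * mx
    return lambda s: a + b * s

def predict_analogy(train, s):
    # 1-nearest-neighbour "analogy": reuse the effort of the most similar past project.
    return min(train, key=lambda p: abs(p[0] - s))[1]

def mmre(pairs):
    # Mean magnitude of relative error: mean of |actual - predicted| / actual.
    return statistics.mean(abs(a - p) / a for a, p in pairs)

def loocv_mmre(data, model):
    # Leave-one-out cross validation: predict each project from all the others.
    results = []
    for i, (s, e) in enumerate(data):
        train = data[:i] + data[i + 1:]
        results.append((e, model(train, s)))
    return mmre(results)

def compare_once(rng, n=20):
    # One run of the criticized procedure: a single sample, one indicator, LOOCV.
    data = [simulate_project(rng) for _ in range(n)]
    reg = loocv_mmre(data, lambda train, s: fit_linear(train)(s))
    ana = loocv_mmre(data, predict_analogy)
    return "regression" if reg < ana else "analogy"

rng = random.Random(42)
winners = [compare_once(rng) for _ in range(50)]
print("regression wins:", winners.count("regression"),
      "analogy wins:", winners.count("analogy"))
```

Because the true process here is linear, regression "should" win every time; any runs won by analogy are produced purely by sampling variability in the procedure, which is the unreliability the study demonstrates.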