Software engineering metrics and models.
A Procedure for Analyzing Unbalanced Datasets. IEEE Transactions on Software Engineering.
Case Studies for Method and Tool Evaluation. IEEE Software.
Using Simulation to Evaluate Prediction Techniques. METRICS '01 Proceedings of the 7th International Symposium on Software Metrics.
Further Comparison of Cross-Company and Within-Company Effort Estimation Models for Web Applications. METRICS '04 Proceedings of the 10th International Symposium on Software Metrics.
Investigating Web size metrics for early Web cost estimation. Journal of Systems and Software.
Proceedings of the 16th International Conference on World Wide Web.
Cross versus Within-Company Cost Estimation Studies: A Systematic Review. IEEE Transactions on Software Engineering.
Cross-company vs. single-company web effort models using the Tukutuku database: An extended study. Journal of Systems and Software.
A study of project selection and feature weighting for analogy based software cost estimation. Journal of Systems and Software.
On the relative value of cross-company and within-company data for defect prediction. Empirical Software Engineering.
When to use data from other projects for effort estimation. Proceedings of the IEEE/ACM International Conference on Automated Software Engineering.
How to Find Relevant Data for Effort Estimation? ESEM '11 Proceedings of the 2011 International Symposium on Empirical Software Engineering and Measurement.
Local vs. global models for effort estimation and defect prediction. ASE '11 Proceedings of the 2011 26th IEEE/ACM International Conference on Automated Software Engineering.
Exploiting the Essential Assumptions of Analogy-Based Effort Estimation. IEEE Transactions on Software Engineering.
This study investigates to what extent Web effort estimation models built using cross-company data sets can provide suitable effort estimates for Web projects belonging to another company, compared with Web effort estimates obtained using that company's own data on its past projects (a single-company data set). It extends a previous study (S3), in which the same research questions were investigated using data on 67 Web projects from the Tukutuku database. Since S3 was carried out, data on a further 128 Web projects were added to Tukutuku; this study therefore uses the entire set of 195 projects from the Tukutuku database, which now also includes new data from other single-company data sets. Predictions from cross-company and single-company models are compared using Manual Stepwise Regression with Linear Regression, and Case-Based Reasoning. In addition, we investigated to what extent applying a filtering mechanism to cross-company data sets prior to building prediction models affects the accuracy of the effort estimates they provide. The present study corroborates the conclusions of S3, since the cross-company models provided much worse predictions than the single-company models. Moreover, the filtering mechanism significantly improved the prediction accuracy of cross-company models when estimating single-company projects, making it comparable to the accuracy obtained using single-company data sets.
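To illustrate the Case-Based Reasoning (analogy-based) technique the abstract refers to, the sketch below estimates a new project's effort as the mean effort of its k most similar past projects, with features min-max normalized before computing Euclidean distances. This is a minimal generic sketch, not the paper's actual configuration: the Tukutuku feature set, feature weighting, number of analogues, and adaptation rules used in the study are not reproduced here, and the function and variable names are illustrative assumptions.

```python
import math

def normalize(rows):
    """Min-max normalize each feature column to [0, 1] so that no
    single feature dominates the distance computation."""
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    rngs = [(max(c) - mn) or 1.0 for c, mn in zip(cols, mins)]
    return [[(v - mn) / r for v, mn, r in zip(row, mins, rngs)]
            for row in rows]

def cbr_estimate(target, projects, efforts, k=2):
    """Analogy-based (CBR) effort estimate: mean effort of the k past
    projects whose feature vectors are closest to the target project
    under Euclidean distance (illustrative sketch, not the study's
    exact CBR setup)."""
    all_rows = normalize(projects + [target])
    hist, tgt = all_rows[:-1], all_rows[-1]
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(zip(hist, efforts), key=lambda pair: dist(pair[0], tgt))
    return sum(e for _, e in ranked[:k]) / k
```

For example, with past projects described by two hypothetical size features (say, page count and feature count), a target of `[11, 2]` against `[[10, 2], [12, 3], [100, 20]]` with efforts `[50, 60, 500]` and `k=2` selects the two small projects as analogues, yielding an estimate of 55.0 person-hours. A cross-company filtering mechanism would, conceptually, discard past projects whose distance to the target exceeds some threshold before this estimation step.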