This paper describes models, built through regression analysis, whose purpose is to explain the variation in accuracy and bias of an organization’s estimates of software development effort. We collected information about variables that we believed would affect the accuracy or bias of effort estimates for tasks completed by the organization; in total, information about 49 software development tasks was collected. We found that the following conditions led to less accurate estimates: (1) the estimate was provided by a person in the role of “software developer” rather than “project leader”; (2) the project had time-to-delivery, rather than quality or cost, as its highest priority; and (3) the estimator did not participate in completing the task. The following conditions led to an increased bias towards under-estimation: (1) the estimate was provided by a person in the role of “project leader” rather than “software developer”; and (2) the estimator assessed the accuracy of their own estimates of similar, previously completed tasks to be low (more than 20% error). Although all variables included in the models were significant p
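The analysis described above turns each task's estimated and actual effort into two dependent variables — accuracy (magnitude of relative error) and bias (signed relative error) — which are then related to candidate explanatory variables such as the estimator's role. A minimal sketch of that derivation step, with purely hypothetical task data and variable names (this is not the study's data or code):

```python
# Illustrative sketch: compute the two dependent variables from the
# abstract -- accuracy (magnitude of relative error, MRE) and bias
# (signed relative error, positive = under-estimation) -- and average
# them per value of a candidate explanatory variable (here, the role
# of the person who provided the estimate). All records are invented.

def mre(actual, estimated):
    """Magnitude of relative error: |actual - estimated| / actual."""
    return abs(actual - estimated) / actual

def bias(actual, estimated):
    """Signed relative error; positive values mean under-estimation."""
    return (actual - estimated) / actual

def group_means(tasks, metric, key="role"):
    """Mean of `metric` per value of the grouping variable `key`."""
    groups = {}
    for t in tasks:
        groups.setdefault(t[key], []).append(metric(t["actual"], t["estimated"]))
    return {g: sum(v) / len(v) for g, v in groups.items()}

# Hypothetical tasks; effort in person-hours.
tasks = [
    {"role": "software developer", "estimated": 10, "actual": 15},
    {"role": "software developer", "estimated": 8,  "actual": 10},
    {"role": "project leader",     "estimated": 10, "actual": 11},
]

accuracy_by_role = group_means(tasks, mre)
bias_by_role = group_means(tasks, bias)
```

In the study itself these per-task values would feed a regression with the categorical conditions (role, project priority, estimator participation, self-assessed past accuracy) as predictors; grouping by one variable at a time, as above, only shows the shape of the data.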