Building and evaluating prediction systems is an important activity for software engineering researchers. Increasing numbers of techniques and datasets are now being made available. Unfortunately, systematic comparison is hindered by the use of different accuracy indicators and evaluation processes. We argue that these indicators are statistics that describe properties of the estimation errors, or residuals, and that the sensible choice of indicator is largely governed by the goals of the estimator. For this reason it may be helpful for researchers to provide a range of indicators. We also argue that it is useful to test formally for significant differences between competing prediction systems, and we note that where only a few cases are available this can be problematic: in other words, the research instrument may have insufficient power. We demonstrate that this is the case for a well-known empirical study of cost models. Simulation, however, could be one means of overcoming this difficulty.
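
The abstract's three points (residual-based accuracy indicators, formal significance testing, and simulation to gauge statistical power) can be made concrete in a short sketch. The following Python illustration is an assumption of ours, not code or data from the study: the effort figures, the two "prediction systems", and their error distributions are all synthetic and chosen only to show the mechanics.

    # A minimal sketch (illustrative assumptions, not the paper's code):
    # residual-based accuracy indicators, a paired significance test
    # between two competing prediction systems, and a small simulation
    # to gauge the power problem when only a few cases are available.

    import numpy as np
    from scipy.stats import wilcoxon

    def mre(actual, predicted):
        """Magnitude of relative error per project: |actual - predicted| / actual."""
        return np.abs(actual - predicted) / actual

    def indicators(actual, predicted):
        """Common indicators, each a statistic of the relative errors."""
        e = mre(actual, predicted)
        return {
            "MMRE": e.mean(),                # mean magnitude of relative error
            "MdMRE": np.median(e),           # median; robust to outlier projects
            "Pred(25)": (e <= 0.25).mean(),  # share of estimates within 25%
        }

    rng = np.random.default_rng(0)

    def simulate(n):
        """Synthetic actual efforts plus two hypothetical prediction systems:
        A is unbiased with less spread, B is biased high with more spread."""
        actual = rng.uniform(10, 100, size=n)
        pred_a = actual * rng.normal(1.0, 0.25, size=n)
        pred_b = actual * rng.normal(1.1, 0.35, size=n)
        return actual, pred_a, pred_b

    actual, pred_a, pred_b = simulate(15)
    print("A:", indicators(actual, pred_a))
    print("B:", indicators(actual, pred_b))

    # Paired non-parametric test on the absolute residuals. With only
    # ~15 cases the test may fail to flag a difference that is real
    # by construction.
    res = wilcoxon(np.abs(actual - pred_a), np.abs(actual - pred_b))
    print(f"Wilcoxon p-value at n=15: {res.pvalue:.3f}")

    def power(n, trials=1000, alpha=0.05):
        """Estimated power: how often the test detects the built-in difference."""
        hits = 0
        for _ in range(trials):
            a, pa, pb = simulate(n)
            if wilcoxon(np.abs(a - pa), np.abs(a - pb)).pvalue < alpha:
                hits += 1
        return hits / trials

    print(f"estimated power: n=15 -> {power(15):.2f}, n=60 -> {power(60):.2f}")

Run as-is, the sketch prints several indicators for each system (illustrating why a range of indicators is more informative than any single one), a paired Wilcoxon test on the absolute residuals, and a resampling estimate of the test's power at two sample sizes, showing how a small dataset can leave a genuine difference undetected.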