An empirical validation of software cost estimation models
Communications of the ACM
Software sizing and estimating: Mk II FPA (Function Point Analysis)
Empirical studies of assumptions that underlie software cost-estimation models
Information and Software Technology
Software metrics (2nd ed.): a rigorous and practical approach
Bayesian Analysis of Empirical Software Engineering Cost Models
IEEE Transactions on Software Engineering
Software Engineering Economics
Software Development Cost Estimation Using Function Points
IEEE Transactions on Software Engineering
Learning How to Improve Effort Estimation in Small Software Development Companies
COMPSAC '00 24th International Computer Software and Applications Conference
Can Results from Software Engineering Experiments be Safely Combined?
METRICS '99 Proceedings of the 6th International Symposium on Software Metrics
Building A Software Cost Estimation Model Based On Categorical Data
METRICS '01 Proceedings of the 7th International Symposium on Software Metrics
Practical Statistics for Medical Research
Assessing Variation in Development Effort Consistency Using a Data Source with Missing Data
Software Quality Control
Handling categorical variables in effort estimation
Proceedings of the ACM-IEEE international symposium on Empirical software engineering and measurement
By and large, given the inherent subjectivity in defining and measuring the factors used in algorithmic effort estimation methods, it seems reasonable to assume that when such methods produce consistent estimates this is partly due to estimator experience. Software development factors are also usually assumed to influence actual effort to different degrees. For example, the original COCOMO model and Albrecht's Function Points made no specific allowance for programming language or problem domain, whereas development mode (in COCOMO) and function type complexity (in Albrecht's Function Points) are treated as crucial. Nevertheless, studies have concluded that 4GLs are associated with higher productivity than 3GLs. Such conclusions about productivity are easy to support: for example, it usually requires less effort to develop a database application with a purpose-built DBMS product than with a 3GL.

In general, however, an appropriate development language and platform will be selected for a given problem domain. We might therefore expect that, provided the estimator has experience of the problem domain, the choice of development language will not unduly influence estimate consistency. Algorithmic methods nonetheless usually require calibration to different problem domains, for example because the method was originally built from data drawn from another type of domain. Moreover, an estimator's consistency within a problem domain may itself vary, for one or more reasons. Intuitively, these include: the estimator may lack estimation experience in some domains; or the development team(s) may have different levels of experience in different domains, which the estimator finds difficult to take into account.

We demonstrate how, in general, the influence of problem domain may be assessed using a Hierarchical Bayesian inference procedure. We also show how values can be derived to account for variations in estimate consistency across problem domains.
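To make the idea concrete, the following is a minimal sketch of the kind of hierarchical reasoning described above; it is not the authors' actual procedure, but a simplified empirical-Bayes analogue. Each project's consistency is measured as log(actual effort / estimated effort), each problem domain gets its own mean effect, and sparsely observed domains are shrunk toward the overall mean. The domain names and data below are entirely hypothetical.

```python
import random
import statistics

def shrunken_domain_effects(ratios_by_domain):
    """Per-domain mean log(actual/estimate), partially pooled toward the grand mean.

    ratios_by_domain maps a domain name to a list of per-project
    log effort ratios. Domains with few projects (or noisy data) are
    pulled more strongly toward the overall mean, mimicking the
    partial pooling a hierarchical Bayesian model would perform.
    """
    all_ratios = [r for rs in ratios_by_domain.values() for r in rs]
    grand_mean = statistics.mean(all_ratios)

    # Within-domain variance, averaged over domains with >= 2 projects.
    within_var = statistics.mean(
        statistics.variance(rs) for rs in ratios_by_domain.values() if len(rs) > 1
    )
    # Spread of the raw domain means around each other.
    domain_means = {d: statistics.mean(rs) for d, rs in ratios_by_domain.items()}
    between_var = statistics.variance(domain_means.values())

    effects = {}
    for d, rs in ratios_by_domain.items():
        n = len(rs)
        # Weight on the domain's own mean: grows with sample size n and
        # with between-domain spread, shrinks when within-domain noise dominates.
        w = between_var / (between_var + within_var / n)
        effects[d] = w * domain_means[d] + (1 - w) * grand_mean
    return effects

# Hypothetical data: log effort ratios for projects in three domains.
random.seed(1)
data = {
    "banking": [random.gauss(0.10, 0.2) for _ in range(12)],
    "telecom": [random.gauss(-0.05, 0.2) for _ in range(4)],
    "embedded": [random.gauss(0.30, 0.2) for _ in range(2)],
}
effects = shrunken_domain_effects(data)
```

Each resulting value lies between the domain's raw mean and the grand mean: the thinly sampled "embedded" domain is shrunk hardest, which is exactly the behaviour that lets such a model assess how much a problem domain genuinely influences estimate consistency rather than reflecting sampling noise.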