An empirical validation of software cost estimation models. Communications of the ACM.
Software engineering metrics and models.
Function point analysis.
Software Size Estimation of Object-Oriented Systems. IEEE Transactions on Software Engineering.
Estimation of information systems development efforts: a pilot study. Information and Management.
Estimating Software Project Effort Using Analogies. IEEE Transactions on Software Engineering.
Estimating software costs.
An assessment and comparison of common software cost estimation modeling techniques. Proceedings of the 21st International Conference on Software Engineering.
A Controlled Experiment to Assess the Benefits of Estimating with Analogy and Regression Models. IEEE Transactions on Software Engineering.
Function point analysis: measurement practices for successful software projects.
Rapid Development: Taming Wild Software Schedules.
Software Engineering Economics.
Frequently Begged Questions and How to Answer Them. IEEE Software.
Experience With the Accuracy of Software Maintenance Task Effort Prediction Models. IEEE Transactions on Software Engineering.
A Further Empirical Investigation of the Relationship Between MRE and Project Size. Empirical Software Engineering.
Agile Estimating and Planning.
IT Failure Rates--70% or 10-15%? IEEE Software.
Software Measurement and Estimation: A Practical Approach (Quantitative Software Engineering Series).
The Standish report: does it really describe a software crisis? Communications of the ACM.
Letters: The Cone of Uncertainty. IEEE Software.
Software Estimation: Demystifying the Black Art.
Quantifying the effects of IT-governance rules. Science of Computer Programming.
The impact of size and volatility on IT project performance. Communications of the ACM.
A General Empirical Solution to the Macro Software Sizing and Estimating Problem. IEEE Transactions on Software Engineering.
Quantifying requirements volatility effects. Science of Computer Programming.
Software Engineering: Principles and Practice.
A review of studies on expert estimation of software development effort. Journal of Systems and Software.
Quantifying IT estimation risks. Science of Computer Programming.
Quantifying forecast quality of IT business value. Science of Computer Programming.
In this article, we show how to quantify the quality of IT forecasts. First, we analyze two metrics previously proposed for analyzing IT forecast data: Boehm's cone of uncertainty and DeMarco's Estimating Quality Factor. We show theoretical problems with the cone of uncertainty (for example, that the conical shape of Boehm's cone is not necessarily caused by improved estimation, but also appears when estimation accuracy decreases), and we generalize it into a family of distributions that predict IT forecasts on the basis of expected accuracy and predictive bias. With these, we support decision making by providing critical information on IT forecasting quality to IT governors. We illustrate that plotting forecast-to-actual ratios against a predicted distribution reveals potential biases, for instance political ones, involved in IT forecasting. We illustrate our approach by applying it to four real-world organizations (1824 projects, 12287 forecasts, 1059+ million Euro). We show that the distributions of forecast-to-actual ratios vary between organizations in at least three dimensions: the accuracy of estimation, the tendency of forecasts to converge to the actual over the life of the project, and the systematic bias toward over- or underestimation. Moreover, we illustrate how to use this information to enrich forecasts for decision making. Finally, we point out that systematic biases, if not accounted for, render often-quoted rates of project success meaningless. We survey benchmarks related to forecasting and propose new benchmarks based on our extensive data.
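The two quantities the abstract builds on can be made concrete with a small example. The sketch below (not the paper's code; the input numbers are invented) computes forecast-to-actual ratios for one project's forecast history, and DeMarco's Estimating Quality Factor using its commonly cited definition: the area under the actual divided by the area between the (piecewise-constant) forecast curve and the actual.

```python
def forecast_to_actual_ratios(forecasts, actual):
    """f/a ratio per forecast; 1.0 is perfect, values < 1 indicate underestimation."""
    return [f / actual for f in forecasts]

def eqf(times, forecasts, actual, end_time):
    """DeMarco's Estimating Quality Factor for one project.

    Each forecast is assumed to hold from the time it was made until the
    next forecast (or project end). Higher EQF means better forecasting.
    """
    deviation_area = 0.0
    for i, f in enumerate(forecasts):
        t_next = times[i + 1] if i + 1 < len(times) else end_time
        deviation_area += abs(f - actual) * (t_next - times[i])
    total_area = actual * (end_time - times[0])
    return float('inf') if deviation_area == 0 else total_area / deviation_area

# Invented example: quarterly cost forecasts (kEUR) for a 12-month project
# whose actual cost turned out to be 1000 kEUR.
times = [0, 3, 6, 9]             # months at which forecasts were issued
forecasts = [600, 800, 900, 980]  # forecasts converge toward the actual
actual, end = 1000.0, 12

ratios = forecast_to_actual_ratios(forecasts, actual)  # all < 1: underestimation bias
quality = eqf(times, forecasts, actual, end)
```

In the paper's terms, the ratio series shows both the bias dimension (all ratios below 1) and the convergence dimension (ratios approach 1 over the project's life); plotting many such ratios against a predicted distribution is what reveals systematic, possibly political, bias.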