Quantifying IT forecast quality

  • Authors:
  • J. L. Eveleens; C. Verhoef

  • Affiliations:
  • VU University Amsterdam, Department of Computer Science, De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands (both authors)

  • Venue:
  • Science of Computer Programming
  • Year:
  • 2009


Abstract

In this article, we show how to quantify the quality of IT forecasts. First, we analyze two metrics previously proposed to analyze IT forecast data: Boehm's cone of uncertainty and DeMarco's Estimating Quality Factor. We show theoretical problems with the cone of uncertainty (for example, that the conical shape of Boehm's cone is not caused by improved estimation, but can also be found when estimation accuracy decreases), and generalize it as a family of distributions that predict IT forecasts on the basis of expected accuracy and predictive bias. With these, we support decision making by providing critical information on IT forecasting quality to IT governors. We illustrate that plotting forecast-to-actual ratios against a predicted distribution reveals potential biases, for instance political ones, involved with IT forecasting. We illustrate our approach by applying it to four real-world organizations (1824 projects, 12287 forecasts, 1059+ million Euro). We show that the distribution of forecast-to-actual ratios varies between organizations in at least three dimensions: in accuracy of estimation, in the tendency of forecasts to converge to the actual over the life of the project, and in systematic bias toward over- and underestimation. Moreover, we illustrate how to use this information to enrich forecasts for decision making. Finally, we point out that systematic biases, if not accounted for, render often-quoted rates of project success meaningless. We survey benchmarks related to forecasting and propose new benchmarks based on our extensive data.
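The forecast-to-actual ratio central to the abstract can be sketched as follows. This is a minimal illustration with hypothetical numbers, not the authors' analysis: each forecast made during a project is divided by the project's final actual cost, and the median ratio hints at systematic bias (below 1 suggests underestimation, above 1 overestimation).

```python
# Illustrative sketch of forecast-to-actual (f/a) ratios; the data and
# function names are hypothetical, not taken from the paper.
from statistics import median

def forecast_actual_ratios(forecasts, actual):
    """Ratio of each forecast to the final actual cost."""
    if actual <= 0:
        raise ValueError("actual must be positive")
    return [f / actual for f in forecasts]

# Hypothetical project: five forecasts made over its life, actual cost 1000.
forecasts = [600, 700, 850, 950, 990]
ratios = forecast_actual_ratios(forecasts, 1000)

bias = median(ratios)
print(ratios)                            # ratios converge toward 1
print(f"median f/a ratio: {bias:.2f}")   # below 1: underestimation bias
```

Plotting many such ratios against a predicted distribution, as the article proposes, is what exposes the accuracy, convergence, and bias dimensions mentioned above.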