On Building Prediction Systems for Software Engineers

  • Authors:
  • Martin Shepperd; Michelle Cartwright; Gada Kadoda

  • Affiliations:
  • Empirical Software Engineering Research Group, School of Design, Engineering and Computing, Bournemouth University, Talbot Campus, BH12 5BB, UK (all authors)

  • Venue:
  • Empirical Software Engineering
  • Year:
  • 2000


Abstract

Building and evaluating prediction systems is an important activity for software engineering researchers. Increasing numbers of techniques and datasets are now being made available. Unfortunately, systematic comparison is hindered by the use of different accuracy indicators and evaluation processes. We argue that these indicators are statistics that describe properties of the estimation errors or residuals and that the sensible choice of indicator is largely governed by the goals of the estimator. For this reason it may be helpful for researchers to provide a range of indicators. We also argue that it is useful to formally test for significant differences between competing prediction systems and note that where only a few cases are available this can be problematic; in other words, the research instrument may have insufficient power. We demonstrate that this is the case for a well-known empirical study of cost models. Simulation, however, could be one means of overcoming this difficulty.
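
To make the abstract's ideas concrete, the sketch below (not from the paper; the data, function names, and the choice of a randomisation test are illustrative assumptions) shows how residual-based accuracy indicators such as MMRE and Pred(25) can be computed for two competing prediction systems, and how a simulation-based (paired randomisation) test might be used to check whether their absolute residuals differ significantly when only a few cases are available.

```python
import random

def residuals(actuals, predictions):
    """Estimation errors: actual minus predicted effort."""
    return [a - p for a, p in zip(actuals, predictions)]

def mmre(actuals, predictions):
    """Mean magnitude of relative error, one common accuracy indicator."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

def pred(actuals, predictions, level=0.25):
    """Pred(25): proportion of estimates within 25% of the actual value."""
    hits = sum(1 for a, p in zip(actuals, predictions) if abs(a - p) / a <= level)
    return hits / len(actuals)

def paired_randomisation_test(abs_err_a, abs_err_b, n_iter=10_000, seed=0):
    """Simulation-based test of whether two prediction systems differ in
    mean absolute residual on the same cases (two-sided p-value)."""
    rng = random.Random(seed)
    n = len(abs_err_a)
    observed = sum(x - y for x, y in zip(abs_err_a, abs_err_b)) / n
    extreme = 0
    for _ in range(n_iter):
        total = 0.0
        for x, y in zip(abs_err_a, abs_err_b):
            d = x - y
            total += d if rng.random() < 0.5 else -d  # randomly swap each pair
        if abs(total / n) >= abs(observed):
            extreme += 1
    return extreme / n_iter

# Hypothetical effort data (person-months) and two hypothetical prediction systems.
actual  = [120, 80, 200, 45, 300, 150, 60, 90]
model_a = [110, 95, 180, 50, 260, 170, 55, 100]
model_b = [140, 70, 230, 40, 330, 130, 75, 80]

print("MMRE    A:", round(mmre(actual, model_a), 3), " B:", round(mmre(actual, model_b), 3))
print("Pred25  A:", pred(actual, model_a), " B:", pred(actual, model_b))

err_a = [abs(r) for r in residuals(actual, model_a)]
err_b = [abs(r) for r in residuals(actual, model_b)]
print("p-value (paired randomisation):", paired_randomisation_test(err_a, err_b))
```

With so few cases, the p-value from such a test is likely to be large even when the indicators differ noticeably, which illustrates the abstract's point about insufficient power and why a range of indicators and explicit significance testing are both worth reporting.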