Hard-to-use evaluation criteria for software engineering

  • Authors: Richard Hamlet
  • Affiliations: University of Maryland, USA
  • Venue: Journal of Systems and Software
  • Year: 1981

Abstract

Most evaluations of software tools and methodologies could be called "public relations," because they are subjective arguments given by proponents. The need for markedly increased productivity in software development is now forcing better evaluation criteria to be used. Software engineering must begin to live up to its second name by finding quantitative measures of quality. This paper suggests some evaluation criteria that are probably too difficult to carry out, criteria that may always remain subjective. It argues that these criteria are so important that we should keep them in mind as a balance to the hard data we can obtain, and that we should seek to learn more about them despite the difficulty of doing so. A historical example is presented as an illustration of the necessity of retaining subjective criteria. High-level languages and their compilers today enjoy almost universal acceptance. It will be argued that the value of this tool has never been precisely evaluated, and that if narrow measures had been applied at its inception, it would have been found wanting. This historical lesson is then applied to the problem of evaluating a novel specification and testing tool under development at the University of Maryland.