Automatic evaluation of computer generated text: a progress report on the TextEval project

  • Authors:
  • Chris Brew; Henry S. Thompson

  • Affiliations:
  • University of Edinburgh, Scotland; University of Edinburgh, Scotland

  • Venue:
  • HLT '94 Proceedings of the workshop on Human Language Technology
  • Year:
  • 1994

Abstract

We present results of experiments designed to assess the usefulness of a new technique for the evaluation of translation quality, comparing human rankings with automatic measures. The basis of our approach is the use of a standard set and the adoption of a statistical view of translation quality. This approach can provide evaluations that do not depend on any particular theory of translation and are therefore potentially more objective than those of previous techniques. The work presented here was supported by the Science and Engineering Research Council and the Economic and Social Research Council of Great Britain, and would not have been possible without the gracious assistance of Ian Mason of Heriot-Watt University, Edinburgh.
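
As one illustration of how human rankings might be compared against an automatic measure, the sketch below computes a Spearman rank correlation over a handful of candidate translations. This is a minimal, hypothetical example of a rank comparison, not the TextEval measure itself; the translation data, the scores, and the choice of Spearman's rho are assumptions made purely for illustration.

```python
def rank(values, descending=True):
    """Convert raw scores to 1-based ranks (1 = best); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=descending)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation between two rankings without ties."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical data: five candidate translations of the same source text.
human_ranks = [1, 2, 3, 4, 5]                      # ranking assigned by human judges
automatic_scores = [0.91, 0.84, 0.80, 0.65, 0.70]  # scores from some automatic measure

rho = spearman_rho(human_ranks, rank(automatic_scores))
print(f"Agreement between human ranking and automatic measure: rho = {rho:.2f}")
```

A high positive correlation here would indicate that the automatic measure orders the candidate translations much as the human judges do; the actual statistics used in the project are described in the paper itself.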