Evaluating text quality: judging output texts without a clear source

  • Authors:
  • Anthony Hartley; Donia Scott

  • Affiliations:
  • University of Brighton, UK; University of Brighton, UK

  • Venue:
  • EWNLG '01 Proceedings of the 8th European workshop on Natural Language Generation - Volume 8
  • Year:
  • 2001

Abstract

We consider how far two attributes of text quality commonly used in MT evaluation -- intelligibility and fidelity -- apply within NLG. While the former appears to transfer directly, the latter needs to be completely re-interpreted. We make a crucial distinction between the needs of symbolic authors and those of end-readers. We describe a form of textual feedback, based on a controlled language used for specifying software requirements, which appears well suited to authors' needs, and an approach for incrementally improving the fidelity of this feedback text to the content model.