Evaluating natural language generated database records

  • Authors: Rita McCardell
  • Affiliations: -
  • Venue: HLT '90 Proceedings of the workshop on Speech and Natural Language
  • Year: 1990


Abstract

The proliferation of natural language processing (NLP) systems and their applications brings with it the task of finding a way to compare, and thereby evaluate, the output of these systems. This paper focuses on one such evaluation technique, which originated in the text understanding system of Project MURASAKI. The technique quantitatively and qualitatively measures the match (or distance) between the output of one text understanding system and the expected output of another.
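
The abstract does not reproduce the scoring formula, but the idea of quantitatively measuring the match (or distance) between a generated database record and an expected one can be illustrated with a minimal sketch. The field names, the partial-match rule, and the equal weighting of fields below are illustrative assumptions, not the actual MURASAKI metric:

    # Hypothetical sketch of record-to-record scoring. The paper's actual
    # metric is not specified here; field names, the substring-based
    # partial-match rule, and equal field weights are assumptions.

    def field_score(generated: str, expected: str) -> float:
        """Score one field: 1.0 for an exact match, 0.5 for a
        partial (substring) match, 0.0 otherwise."""
        if generated == expected:
            return 1.0
        if generated and expected and (generated in expected or expected in generated):
            return 0.5
        return 0.0

    def record_distance(generated: dict, expected: dict) -> float:
        """Average the per-field scores over the expected record's fields
        and return a distance: 0.0 is a perfect match, 1.0 no match."""
        if not expected:
            return 0.0
        total = sum(field_score(generated.get(k, ""), v)
                    for k, v in expected.items())
        return 1.0 - total / len(expected)

    # Example: a system-generated record versus the expected (key) record.
    expected = {"event": "merger", "company": "ACME Corp", "date": "1990-06-01"}
    generated = {"event": "merger", "company": "ACME", "date": "1990-06-02"}
    print(record_distance(generated, expected))  # 0.5: one exact, one partial, one miss

A scheme like this yields the quantitative side of the evaluation (the aggregate distance) while the per-field scores support the qualitative side, showing which slots of the record were filled correctly, partially, or not at all.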