Evaluating text categorization
HLT '91 Proceedings of the workshop on Speech and Natural Language
The proliferation of natural language processing (NLP) systems and their applications brings with it the inevitable task of comparing, and thereby evaluating, the output of these systems. This paper focuses on one such evaluation technique, which originated in the text understanding system Project MURASAKI. The technique quantitatively and qualitatively measures the match (or distance) between the output of one text understanding system and the expected output of another.
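The abstract does not spell out the metric itself, but one minimal way such a match could be quantified is as set overlap between a system's predicted category labels and the expected ones; the sketch below is purely illustrative, and the function and label names are assumptions, not the paper's actual measure.

```python
def match_score(predicted, expected):
    """Illustrative overlap-based match between two label sets.

    Returns (precision, recall): the fraction of predicted labels
    that were expected, and the fraction of expected labels that
    the system recovered.
    """
    predicted, expected = set(predicted), set(expected)
    if not predicted or not expected:
        return 0.0, 0.0
    overlap = len(predicted & expected)
    return overlap / len(predicted), overlap / len(expected)
```

For example, `match_score(["finance", "sports"], ["finance", "politics"])` yields `(0.5, 0.5)`: one of two predicted labels is correct, and one of two expected labels is found. A distance could then be derived as, e.g., one minus the harmonic mean of the two values.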