This article describes an automatic evaluation procedure for NLP system robustness under the strain of noisy and ill-formed input. The procedure requires no manual work or annotated resources, is independent of language and annotation scheme, and produces reliable estimates of the robustness of NLP systems. The only requirement is an estimate of the NLP system's accuracy. The procedure was applied to five parsers and one part-of-speech tagger on Swedish text. To establish its reliability, a comparative evaluation involving annotated resources was carried out on the tagger and three of the parsers.
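The core idea of such an annotation-free procedure can be sketched as follows: automatically degrade the input text with synthetic noise, run the system on both the clean and the degraded versions, and measure how much the outputs disagree. This is a minimal illustrative sketch, not the article's actual method; the character-substitution noise model, the noise rate, and the `agreement` metric are assumptions chosen for clarity.

```python
import random

def degrade(text, rate=0.05, seed=0):
    """Introduce character-level noise (simulating misspellings) at the
    given per-character rate. Only alphabetic characters are replaced,
    so tokenization is largely preserved. (Assumed noise model.)"""
    rng = random.Random(seed)
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def agreement(tags_clean, tags_noisy):
    """Fraction of positions where the system's output on clean and
    noisy input agrees; a proxy for robustness when no gold-standard
    annotation is available."""
    same = sum(a == b for a, b in zip(tags_clean, tags_noisy))
    return same / len(tags_clean)
```

Given a tagger or parser, one would compare `system(text)` against `system(degrade(text))` at increasing noise rates; combined with an independent estimate of the system's accuracy on clean text, the agreement curve indicates how gracefully the system degrades.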