Processing unknown words in HPSG
COLING '98 Proceedings of the 17th international conference on Computational linguistics - Volume 1
Lexicon acquisition with a large-coverage unification-based grammar
EACL '03 Proceedings of the Tenth Conference of the European Chapter of the Association for Computational Linguistics - Volume 2
Error mining for wide-coverage grammar engineering
ACL '04 Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics
Error mining in parsing results
ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics
Using unknown word techniques to learn known words
EMNLP '10 Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Classifying French verbs using French and English lexical resources
ACL '12 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1
The effectiveness of parsers based on manually created resources, namely a grammar and a lexicon, relies mostly on the quality of these resources. Increasing a parser's coverage and precision therefore usually means improving them both. Improving them manually is a time-consuming and complex task: it is not always obvious which resource is the true culprit for a given mistake, nor where the mistake lies and how to correct it. Techniques such as those of van Noord (2004) and Sagot and Villemonte de La Clergerie (2006) provide a convenient way to automatically identify forms whose lexicon entries are potentially erroneous. We have integrated and extended such techniques in a wider process which, thanks to the grammar's ability to tell how these forms could be used as part of correct parses, is able to propose lexical corrections for the identified entries. In this paper we present an implementation of this process and discuss the main results we have obtained on a wide-coverage French syntactic lexicon.
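The error-mining idea referenced above can be sketched minimally: rank word forms by the rate at which sentences containing them fail to parse, so that forms with suspiciously high failure rates surface as likely lexicon culprits. This is a simplified illustration in the spirit of van Noord (2004), not the paper's actual implementation; all function and variable names here are hypothetical.

```python
# Minimal error-mining sketch: a form is "suspicious" when the
# sentences it occurs in fail to parse more often than expected.
from collections import Counter

def mine_errors(parsed, failed, min_freq=2):
    """parsed / failed: lists of tokenized sentences the parser
    accepted / rejected. Returns (form, failure_rate) pairs, most
    suspicious first; forms seen fewer than min_freq times are ignored."""
    # Count each form once per sentence, not once per occurrence.
    fail_counts = Counter(w for sent in failed for w in set(sent))
    total_counts = Counter(w for sent in parsed + failed for w in set(sent))
    suspicion = {
        w: fail_counts[w] / total_counts[w]
        for w in total_counts
        if total_counts[w] >= min_freq
    }
    return sorted(suspicion.items(), key=lambda kv: -kv[1])

parsed = [["the", "cat", "sleeps"], ["the", "dog", "runs"]]
failed = [["the", "cat", "miaows"], ["a", "cat", "miaows"]]
print(mine_errors(parsed, failed))
# "miaows" ranks first: every sentence containing it failed to parse.
```

Real error miners refine this raw ratio (e.g. with iterative re-estimation or by mining n-grams rather than single forms), but the ranking principle is the same.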