Some advances in transformation-based part of speech tagging. AAAI '94: Proceedings of the Twelfth National Conference on Artificial Intelligence (Vol. 1).
Implementing an efficient part-of-speech tagger. Software—Practice & Experience.
Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text.
ICGI '96: Proceedings of the 3rd International Colloquium on Grammatical Inference: Learning Syntax from Sentences.
Part-of-Speech Tagging Using Progol. ILP '97: Proceedings of the 7th International Workshop on Inductive Logic Programming.
Induction of Constraint Grammar-Rules Using Progol. ILP '98: Proceedings of the 8th International Workshop on Inductive Logic Programming.
Learning Constraint Grammar-style disambiguation rules using inductive logic programming. COLING '98: Proceedings of the 17th International Conference on Computational Linguistics, Volume 2.
Word association norms, mutual information, and lexicography. ACL '89: Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics.
This paper reports ongoing work on producing a state-of-the-art part-of-speech tagger for unedited Swedish text. Rules that eliminate faulty tags have been induced using Progol. In previously reported experiments, almost no linguistically motivated background knowledge was used [5,8]; even so, the results were promising (recall 97.7%, with a remaining average ambiguity of 1.13 tags/word). Compared to the previous study, a much richer and more linguistically motivated background knowledge has been supplied, consisting of examples of noun phrases, verb chains, auxiliary verbs, and sets of part-of-speech categories. The aim has been to create this background knowledge rapidly, without laborious hand-coding of linguistic knowledge. In addition to the new background knowledge, new, more expressive rule types have been induced for two part-of-speech categories and compared to the corresponding rules of the previous baseline experiment. The new rules perform considerably better, with a recall of 99.4% compared to 97.6% for the old rules; precision was also slightly better for the new rules.
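The rules the paper induces are Constraint Grammar-style elimination rules: each word starts with every tag its lexicon allows, and a rule removes a candidate tag when a contextual test matches. The sketch below is a minimal illustration of that mechanism, not the authors' implementation; the rule, tag set, and example sentence are invented, and the recall/ambiguity bookkeeping only mirrors the evaluation measures the abstract reports.

```python
# Hypothetical sketch of Constraint Grammar-style tag elimination.
# Each token carries a set of candidate tags; a rule removes a target
# tag when its context test fires, but never the last remaining reading.

def apply_remove_rules(sentence, rules):
    """sentence: list of (word, set_of_candidate_tags); mutated in place."""
    for i, (word, tags) in enumerate(sentence):
        for target, test in rules:
            if target in tags and len(tags) > 1 and test(sentence, i):
                tags.discard(target)
    return sentence

# Invented example rule: discard a verb reading right after an
# unambiguous determiner (e.g. "the plan" -> "plan" is not a verb here).
def verb_after_determiner(sent, i):
    return i > 0 and sent[i - 1][1] == {"DET"}

rules = [("VERB", verb_after_determiner)]

sentence = [("the", {"DET"}), ("plan", {"NOUN", "VERB"})]
apply_remove_rules(sentence, rules)
# "plan" now keeps only its NOUN reading.

# The paper's evaluation measures, on this toy sentence:
# remaining ambiguity = candidate tags left per word.
ambiguity = sum(len(tags) for _, tags in sentence) / len(sentence)
```

Recall in this setting counts how often the correct tag survives elimination; an overly aggressive rule raises precision (fewer tags left) at the cost of recall, which is the trade-off behind the 99.4% vs. 97.6% comparison.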