Statistical methods for PP attachment fall into two classes according to the training material used: unsupervised methods trained on raw text corpora, and supervised methods trained on manually disambiguated examples. Supervised methods usually outperform unsupervised methods in attachment accuracy. But what if only a small set of manually disambiguated examples is available? We show that in this case it is advantageous to intertwine unsupervised and supervised methods into a single disambiguation algorithm that outperforms either method used alone.
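One way to picture such an interleaving is a back-off scheme: trust the supervised estimate when the small labeled corpus gives enough evidence for a (head, preposition) pair, and otherwise fall back to co-occurrence counts gathered from raw text. The sketch below is illustrative only; the function name, the count structures, and the `min_support` threshold are assumptions for exposition, not the paper's actual algorithm.

```python
from collections import Counter

def attach(verb, noun, prep,
           sup_verb, sup_noun,      # Counter[(head, prep)] from labeled data
           unsup_verb, unsup_noun,  # Counter[(head, prep)] from raw text
           min_support=3):
    """Decide 'V' (verb) or 'N' (noun) attachment for a PP headed by `prep`.

    Hypothetical back-off: use the supervised counts only when the labeled
    evidence for this quadruple reaches `min_support` and is not tied;
    otherwise fall back to unsupervised co-occurrence counts.
    """
    sv, sn = sup_verb[(verb, prep)], sup_noun[(noun, prep)]
    if sv + sn >= min_support and sv != sn:
        return 'V' if sv > sn else 'N'
    uv, un = unsup_verb[(verb, prep)], unsup_noun[(noun, prep)]
    return 'V' if uv > un else 'N'
```

For example, with toy counts `sup_verb = Counter({('eat', 'with'): 4})` and `sup_noun = Counter({('pizza', 'with'): 1})`, the labeled evidence suffices and the supervised ratio decides verb attachment; for an unseen quadruple the decision falls through to the raw-text counts.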