Large-scale corpus-driven PCFG approximation of an HPSG
IWPT '11 Proceedings of the 12th International Conference on Parsing Technologies
We present a novel disambiguation method for unification-based grammars (UBGs). In contrast to other methods, our approach obviates the need for probability models on the UBG side: it shifts the responsibility to simpler context-free models, indirectly obtained from the UBG. Our approach has three advantages: (i) training can be carried out effectively in practice, (ii) parsing and disambiguation of context-free readings require only cubic time, and (iii) the probability distributions involved are mathematically well-defined. In an experiment with a mid-size UBG, we show that our approach is feasible. Using unsupervised training, we achieve 88% accuracy on an exact-match task.
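The cubic-time bound in (ii) is the standard complexity of chart parsing with a context-free grammar. As a minimal sketch of how context-free disambiguation in cubic time works, the following probabilistic CKY parser scores readings under a toy PCFG in Chomsky normal form; the grammar, rules, and probabilities here are illustrative assumptions, not the PCFG approximation derived from an HPSG in the paper.

```python
from collections import defaultdict
import math

# Toy PCFG in Chomsky normal form; probabilities are invented for illustration.
unary = {                      # terminal -> [(lhs, prob), ...]
    "the": [("Det", 1.0)],
    "dog": [("N", 0.6)],
    "cat": [("N", 0.4)],
    "saw": [("V", 1.0)],
}
binary = [                     # (lhs, left-child, right-child, prob)
    ("S", "NP", "VP", 1.0),
    ("NP", "Det", "N", 1.0),
    ("VP", "V", "NP", 1.0),
]

def viterbi_cky(words):
    """Return the log-probability of the best parse rooted in S.

    Runtime is O(n^3 * |G|): n^2 spans, n split points per span,
    a constant amount of work per grammar rule.
    """
    n = len(words)
    best = defaultdict(lambda: float("-inf"))   # (i, j, sym) -> best log-prob
    # Initialize length-1 spans from lexical (unary) rules.
    for i, w in enumerate(words):
        for lhs, p in unary.get(w, []):
            best[(i, i + 1, lhs)] = math.log(p)
    # Fill longer spans bottom-up, trying every split point.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, left, right, p in binary:
                    score = best[(i, k, left)] + best[(k, j, right)] + math.log(p)
                    if score > best[(i, j, lhs)]:
                        best[(i, j, lhs)] = score
    return best[(0, n, "S")]

# Probability of the best S-parse of a 5-word sentence.
print(math.exp(viterbi_cky("the dog saw the cat".split())))
```

A real disambiguator would keep back-pointers to recover the best tree rather than just its score, but the dynamic program above is the part that guarantees the cubic bound.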