We present a log-linear model for disambiguating the analyses produced by a broad-coverage German LFG grammar, focusing on the properties (or features) on which the model is based. We compare this model to an initial model based on only a subset of the properties available to the final model, and observe that the performance of a log-linear model for parse selection depends heavily on the types of properties it is based on. In our case, the log-linear model based on the extended property set achieves an error reduction of 51.0%, comparing very favorably to the 34.5% error reduction achieved with the initial model.
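To make the setup concrete, the following is a minimal sketch of log-linear parse selection: each candidate analysis is scored by a weighted sum of its property counts, scores are normalised over the candidates for one sentence, and the highest-scoring analysis is chosen. The property names and weight values here are purely illustrative, not the actual features or estimated weights of the model described above.

```python
import math

def parse_probabilities(candidate_features, weights):
    """Conditional log-linear model: P(parse | sentence) is proportional to
    exp(sum_i w_i * f_i(parse)), normalised over the candidate analyses.
    Features and weights are illustrative placeholders."""
    scores = [sum(weights.get(name, 0.0) * value
                  for name, value in feats.items())
              for feats in candidate_features]
    z = sum(math.exp(s) for s in scores)  # partition function over candidates
    return [math.exp(s) / z for s in scores]

def select_parse(candidate_features, weights):
    """Parse selection: return the index of the highest-probability analysis."""
    probs = parse_probabilities(candidate_features, weights)
    return max(range(len(probs)), key=probs.__getitem__)

# Hypothetical property counts for two competing analyses of one sentence.
candidates = [
    {"subj_before_obj": 1, "coherent_case": 1},
    {"subj_before_obj": 0, "coherent_case": 1},
]
weights = {"subj_before_obj": 1.2, "coherent_case": 0.8}  # illustrative weights
```

In practice the weights would be estimated from a disambiguated treebank (e.g. by maximising conditional log-likelihood), and the choice of which properties to include is exactly what drives the performance differences reported in the abstract.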