Multilingual dependency learning: a huge feature engineering method to semantic dependency parsing
CoNLL '09 Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task
This paper describes our system for multilingual syntactic and semantic dependency parsing, submitted to the joint task of the CoNLL-2009 shared task. Our system uses rich features and incorporates several integration techniques. It is evaluated on the in-domain and out-of-domain evaluation data of the closed challenge of the joint task. For the in-domain evaluation, our system ranks second in average macro labeled F1 over all seven languages, at 82.52% (only about 0.1% below the best system), and first for English, with a macro labeled F1 of 87.69%. For the out-of-domain evaluation, our system also achieves the second-best average score over the three languages.
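
The macro labeled F1 quoted above follows the CoNLL-2009 joint-task convention, in which the per-language score averages the labeled syntactic attachment score (LAS) and the semantic labeled F1, and the overall figure averages the per-language scores. The sketch below illustrates only that scoring arithmetic under this assumption; the language set and the numbers in it are hypothetical placeholders, not the shared-task results.

```python
# Minimal sketch of the CoNLL-2009 joint-task macro labeled F1,
# assuming the per-language score is the mean of LAS and the
# semantic labeled F1, and the overall score is the mean over
# languages. All numbers below are illustrative placeholders.

def macro_labeled_f1(las, semantic_f1):
    """Per-language macro labeled F1: average of syntactic and semantic scores."""
    return (las + semantic_f1) / 2.0

def average_macro_f1(per_language_scores):
    """Average the per-language macro labeled F1 over all languages."""
    scores = [macro_labeled_f1(las, sem) for las, sem in per_language_scores.values()]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Hypothetical (LAS, semantic labeled F1) pairs, for illustration only.
    example = {
        "English": (89.0, 86.0),
        "Chinese": (76.0, 78.0),
        "Czech":   (80.0, 85.0),
    }
    print("average macro labeled F1 = %.2f%%" % average_macro_f1(example))
```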