This paper describes a pipelined approach to the CoNLL-2009 shared task on joint learning of syntactic and semantic dependencies. The system handles syntactic dependency parsing with a transition-based approach, using MaltParser as the base model. For semantic role labeling (SRL), it uses a Maximum Entropy model to identify predicate senses and classify arguments. Experimental results show that, averaged over all languages, the system achieves a macro F1 score of 67.81%, syntactic accuracy of 78.01%, semantic labeled F1 of 56.69%, macro precision of 71.66%, and micro recall of 64.66%.
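The abstract only names the technique; as an illustration, below is a minimal, self-contained sketch of the Maximum Entropy (multinomial logistic regression) argument-classification stage. The feature names, toy data, and training settings are all hypothetical, chosen to resemble typical SRL indicator features (predicate lemma, dependency relation, POS tag); this is not the authors' implementation.

```python
# Minimal maximum-entropy (softmax regression) sketch for the SRL
# argument-classification stage. Hypothetical feature names and toy
# data; an illustration of the MaxEnt technique, not the paper's code.
import numpy as np

# Toy training set: each instance is a set of binary indicator features
# paired with a semantic role label.
TRAIN = [
    ({"lemma=give", "deprel=SBJ", "pos=NN"}, "A0"),
    ({"lemma=give", "deprel=OBJ", "pos=NN"}, "A1"),
    ({"lemma=give", "deprel=ADV", "pos=RB"}, "AM-MNR"),
    ({"lemma=eat",  "deprel=SBJ", "pos=NN"}, "A0"),
    ({"lemma=eat",  "deprel=OBJ", "pos=NN"}, "A1"),
]

feats  = sorted({f for fs, _ in TRAIN for f in fs})
labels = sorted({y for _, y in TRAIN})
f_idx  = {f: i for i, f in enumerate(feats)}
y_idx  = {y: i for i, y in enumerate(labels)}

def vectorize(fs):
    """Map a feature set to a binary indicator vector."""
    x = np.zeros(len(feats))
    for f in fs:
        if f in f_idx:
            x[f_idx[f]] = 1.0
    return x

X = np.stack([vectorize(fs) for fs, _ in TRAIN])
Y = np.array([y_idx[y] for _, y in TRAIN])

# Train by batch gradient descent on the L2-regularized negative
# log-likelihood, i.e. standard MaxEnt parameter estimation.
W = np.zeros((len(labels), len(feats)))
for _ in range(500):
    scores = X @ W.T                              # (n, |labels|)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(Y)), Y] -= 1.0            # gradient of the NLL
    W -= 0.5 * (probs.T @ X / len(Y) + 1e-3 * W)

def classify(fs):
    """Return the most probable role label for a candidate argument."""
    return labels[int(np.argmax(W @ vectorize(fs)))]

print(classify({"lemma=give", "deprel=OBJ", "pos=NN"}))  # expected: A1
```

In the full pipeline this classifier would run after parsing, with features drawn from the MaltParser dependency tree; a separate MaxEnt model of the same form would handle predicate sense identification.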