Automatic Labeling of Semantic Roles
Computational Linguistics
Head-Driven Statistical Models for Natural Language Parsing
Computational Linguistics
A Maximum-Entropy-Inspired Parser
Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2000)
Using Predicate-Argument Structures for Information Extraction
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL '03), Volume 1
Identifying Semantic Roles Using Combinatory Categorial Grammar
Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP '03)
The Proposition Bank: An Annotated Corpus of Semantic Roles
Computational Linguistics
Semantic Role Labeling via Integer Linear Programming Inference
Proceedings of the 20th International Conference on Computational Linguistics (COLING '04)
Generalized Inference with Multiple Semantic Role Labeling Systems
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL '05)
Semantic Role Labeling via Consensus in Pattern-Matching
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL '05)
Semantic Role Labeling Using Support Vector Machines
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL '05)
This paper demonstrates two methods for improving the performance of instance-based learning (IBL) algorithms on the Semantic Role Labeling (SRL) task. Two IBL algorithms are used: k-Nearest Neighbor (kNN) and Priority Maximum Likelihood (PML) with a modified back-off combination method. The experimental data are the WSJ23 and Brown Corpus test sets from the CoNLL-2005 Shared Task. Applying the Tree-Based Predicate-Argument Recognition Algorithm (PARA) to the data as a preprocessing stage allows kNN and PML to reach F1 scores of 68.61 and 71.02 respectively on WSJ23, and 56.96 and 60.55 on the Brown Corpus, an increase of 8.28 F1 points over the most recently published PML results for this task (Palmer et al., 2005). Training times for IBL algorithms are far shorter than for other techniques widely used for SRL (e.g., full parsing, support vector machines, and perceptrons), and the feature-reduction effect of PARA yields processing speeds of around 1.0 second per sentence for kNN and 0.9 seconds per sentence for PML. These results suggest that IBL could be a more practical way to perform SRL in NLP applications such as real-time Machine Translation or Automatic Speech Recognition.
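The abstract does not specify the classifiers' internals, but the core of the kNN approach is simple enough to sketch. The following is a minimal illustration, not the authors' implementation: it labels a candidate constituent by majority vote over the role labels of its k nearest training instances. The feature set (predicate lemma, phrase type, parse-tree path, position, voice) follows the standard Gildea-and-Jurafsky-style SRL features; the overlap distance and every identifier in the code are assumptions made for illustration.

    from collections import Counter

    # Each training instance pairs a tuple of nominal SRL features
    # (predicate lemma, phrase type, parse-tree path, position, voice)
    # with its gold role label.

    def distance(x, y):
        """Overlap distance: count of mismatching feature values."""
        return sum(1 for a, b in zip(x, y) if a != b)

    def knn_label(train, instance, k=3):
        """Majority role label among the k nearest training instances."""
        nearest = sorted(train, key=lambda ex: distance(ex[0], instance))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

    train = [
        (("give", "NP", "NP^S^VP^VBD", "before", "active"), "ARG0"),
        (("give", "NP", "VBD^VP^NP", "after", "active"), "ARG1"),
        (("give", "PP", "VBD^VP^PP", "after", "active"), "ARG2"),
    ]
    # k=1 keeps the toy example deterministic (no vote ties).
    print(knn_label(train, ("give", "NP", "NP^S^VP^VBD", "before", "active"), k=1))  # ARG0

A sketch like this also makes the speed claims plausible: "training" amounts to storing instances, so nearly all cost falls on the per-sentence lookup, which is exactly where PARA's feature and candidate reduction pays off.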