Most supervised language processing systems suffer a significant drop in performance when tested on text from a domain that differs substantially from that of their training data. Semantic role labeling (SRL) systems are typically trained on newswire text, and their performance on fiction can be as much as 19% worse than on newswire. We investigate techniques for building open-domain semantic role labeling systems that approach the ideal of a train-once, use-anywhere system. We leverage recently developed techniques for learning representations of text with latent-variable language models, and extend them to produce the kinds of features that are useful for semantic role labeling. In experiments, our novel system reduces error by 16% relative to the previous state of the art on out-of-domain text.
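To make the approach concrete, here is a minimal sketch of the general idea of using a latent-variable language model as a feature source: a toy discrete HMM (with illustrative, hand-set parameters rather than parameters trained by EM on a large unlabeled corpus, as a real system would use) labels each word with its most likely hidden state via Viterbi decoding, and those states are attached to the words as domain-independent features for a downstream supervised labeler. All names and numbers below are assumptions for illustration, not details from the paper.

```python
# Sketch: HMM latent states as extra word features for a supervised labeler.
# The HMM parameters below are toy values chosen for illustration; a real
# system would estimate them with EM on large unlabeled corpora.
import math

STATES = [0, 1]                        # two latent "word classes"
VOCAB = {"the": 0, "cat": 1, "sat": 2}

log_start = [math.log(0.6), math.log(0.4)]
log_trans = [[math.log(0.3), math.log(0.7)],
             [math.log(0.8), math.log(0.2)]]
log_emit = [[math.log(0.7), math.log(0.2), math.log(0.1)],   # state 0
            [math.log(0.1), math.log(0.3), math.log(0.6)]]   # state 1

def viterbi(words):
    """Most likely latent-state sequence for a sentence (log-space Viterbi)."""
    obs = [VOCAB[w] for w in words]
    # delta[t][s]: best log-probability of any path ending in state s at time t
    delta = [[log_start[s] + log_emit[s][obs[0]] for s in STATES]]
    back = []
    for o in obs[1:]:
        row, ptr = [], []
        for s in STATES:
            prev = max(STATES, key=lambda p: delta[-1][p] + log_trans[p][s])
            row.append(delta[-1][prev] + log_trans[prev][s] + log_emit[s][o])
            ptr.append(prev)
        delta.append(row)
        back.append(ptr)
    state = max(STATES, key=lambda s: delta[-1][s])
    path = [state]
    for ptr in reversed(back):       # follow backpointers to recover the path
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

def features(words):
    """Per-word feature dicts: lexical identity plus the HMM latent state."""
    states = viterbi(words)
    return [{"word": w, "hmm_state": s} for w, s in zip(words, states)]

print(features(["the", "cat", "sat"]))
```

The latent state acts as a coarse, corpus-driven word class, so a labeler trained on newswire can generalize to fiction words it has never seen, provided they occur in the unlabeled text the language model was trained on.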