Semantic role classification accuracy for most languages other than English is constrained by the small amount of annotated data available. In this paper, we demonstrate how the frame-to-frame relations described in the FrameNet ontology can be used to improve the performance of a FrameNet-based semantic role classifier for Swedish, a low-resource language. To exploit the FrameNet relations, we cast semantic role classification as a non-atomic label prediction task. The experiments show that the cross-frame generalization methods lead to a 27% reduction in the number of errors made by the classifier. For previously unseen frames, the reduction is even larger: 50%.
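The core idea of non-atomic label prediction can be illustrated with a minimal sketch. Assuming a toy fragment of the FrameNet Inheritance relation (the frame and role names below are illustrative, not taken from the paper's data), a frame-specific role label is decomposed into the role itself plus all of its ancestor roles, so that evidence observed in one frame can inform classification in related frames:

```python
# Minimal sketch (hypothetical data): decompose a frame-specific role label
# into a non-atomic label using frame-to-frame Inheritance relations, so a
# classifier can share statistics across frames.

# Hypothetical fragment of the Inheritance relation: (frame, role) -> parent.
INHERITANCE = {
    ("Commerce_buy", "Buyer"): ("Getting", "Recipient"),
    ("Commerce_buy", "Goods"): ("Getting", "Theme"),
    ("Getting", "Recipient"): ("Intentionally_affect", "Agent"),
}

def decompose(frame, role):
    """Return the non-atomic label: the role plus all inherited ancestors."""
    parts = [(frame, role)]
    while (frame, role) in INHERITANCE:
        frame, role = INHERITANCE[(frame, role)]
        parts.append((frame, role))
    return parts

# A role seen only in Commerce_buy now shares label components with roles
# in Getting, so training evidence generalizes across frames.
print(decompose("Commerce_buy", "Buyer"))
# → [('Commerce_buy', 'Buyer'), ('Getting', 'Recipient'),
#    ('Intentionally_affect', 'Agent')]
```

In a linear classifier, each component of the decomposed label contributes its own features, which is what allows a role in a previously unseen frame to be scored through its inherited ancestors.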