An Algorithm that Learns What's in a Name
Machine Learning - Special issue on natural language learning
Maximum entropy models for natural language ambiguity resolution
ICSC '07 Proceedings of the International Conference on Semantic Computing
Inter-coder agreement for computational linguistics
Computational Linguistics
Novel semantic features for verb sense disambiguation
HLT-Short '08 Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers
NAACL-Short '06 Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers
From annotator agreement to noise models
Computational Linguistics
Learning with annotation noise
ACL '09 Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, Volume 1
IJCNLP '05 Proceedings of the Second International Joint Conference on Natural Language Processing
On the development of the RST Spanish Treebank
LAW V '11 Proceedings of the 5th Linguistic Annotation Workshop
Compensating for annotation errors in training a relation extractor
EACL '12 Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics
Short answer assessment: establishing links between research strands
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP
The commonly accepted wisdom is that blind double annotation followed by adjudication of disagreements is necessary to create the training and test corpora that yield the best possible performance. We provide evidence that this is unlikely to be the case. Rather, the greatest value for your annotation dollar lies in single-annotating more data.