Alternative paths to linguistic annotation, such as those that use games or crowdsource the work to web users, have become popular in recent times owing to their very high benefit-to-cost ratios. In this paper, however, we report a case study on POS annotation for Bangla and Hindi in which we observe that reliable linguistic annotation requires not only expert annotators but also a great deal of supervision. For our hierarchical POS annotation scheme, we find that close supervision and training are necessary at every level of the hierarchy, or equivalently, at every level of tagset complexity. Nevertheless, an intelligent annotation tool can significantly accelerate the annotation process and increase inter-annotator agreement for both expert and non-expert annotators. These findings lead us to believe that annotation tasks requiring deep linguistic knowledge (e.g., POS tagging, chunking, treebanking, semantic role labeling) demand expertise and supervision. The focus, therefore, should be on the design and development of appropriate annotation tools equipped with machine-learning-based predictive modules that can significantly boost annotator productivity.
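To illustrate the kind of predictive module the abstract refers to, the sketch below implements a minimal frequency-based pre-annotator: it learns the most frequent tag for each word from a small seed of expert-annotated data and then suggests tags for new tokens, which annotators confirm or correct. This is a toy illustration under stated assumptions, not the paper's actual tool; the romanized Bangla tokens and the `SEED` data are hypothetical examples.

```python
from collections import Counter, defaultdict

# Hypothetical seed corpus of expert-annotated (word, tag) pairs.
SEED = [
    ("ami", "PRP"), ("bhat", "NN"), ("khai", "VB"),
    ("se", "PRP"), ("boi", "NN"), ("pore", "VB"),
    ("bhat", "NN"),
]

def train_unigram_tagger(seed):
    """Count tag frequencies per word; the model maps each word
    to its most frequent tag in the seed data."""
    counts = defaultdict(Counter)
    for word, tag in seed:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def pre_annotate(tokens, model, default="UNK"):
    """Suggest a tag for each token; unseen words get a default
    placeholder tag that the human annotator must fill in."""
    return [(t, model.get(t, default)) for t in tokens]

model = train_unigram_tagger(SEED)
suggestions = pre_annotate(["se", "bhat", "khai", "jol"], model)
print(suggestions)
# [('se', 'PRP'), ('bhat', 'NN'), ('khai', 'VB'), ('jol', 'UNK')]
```

In a real annotation tool this predictor would be replaced by a stronger sequence model and retrained as corrected annotations accumulate, so that suggestion quality, and hence annotator throughput, improves over the course of the project.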