We report on an experiment that tracks complex decision points in linguistic metadata annotation by observing annotators' decision behavior with an eye-tracking device. As experimental conditions, we investigate different forms of textual context and linguistic complexity classes relative to syntax and semantics. Our data provides evidence that annotation performance depends on the semantic and syntactic complexity of the decision points and, more interestingly, indicates that full-scale context is mostly negligible, except for semantically high-complexity cases. From this observational data we then induce a cognitively grounded cost model of linguistic metadata annotation and compare it with existing non-cognitive models. Our data reveals that the cognitively grounded model explains annotation costs (expressed as annotation time) more adequately than the non-cognitive ones.
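The comparison described above, whether a cognitively grounded cost model explains annotation time better than a non-cognitive one, can be illustrated with a minimal sketch. The sketch below is not the paper's actual model: it assumes, purely for illustration, that each model is a simple linear regression predicting annotation time from a single feature (eye-gaze duration for the cognitive model, sentence length for the non-cognitive baseline), and all numbers are invented toy data. Goodness of fit is compared via R².

```python
def fit_r2(xs, ys):
    """Fit y = a*x + b by ordinary least squares and return R^2.

    R^2 = 1 - SS_res / SS_tot measures how much of the variance in
    annotation time the feature explains.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    a = cov / var_x
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Toy data (invented for illustration): per-annotation-decision measurements.
times = [2.0, 3.1, 4.2, 2.4, 5.0, 3.6]        # annotation time in seconds
gaze = [1.0, 1.6, 2.1, 1.2, 2.5, 1.8]         # cognitive feature: gaze duration
lengths = [10, 18, 12, 15, 20, 11]            # non-cognitive feature: sentence length

r2_cognitive = fit_r2(gaze, times)
r2_baseline = fit_r2(lengths, times)
print(f"cognitive model R^2: {r2_cognitive:.3f}")
print(f"baseline model  R^2: {r2_baseline:.3f}")
```

With the toy data constructed so that gaze duration tracks annotation time closely while sentence length only loosely correlates with it, the cognitive model yields the higher R², mirroring the abstract's claim that the cognitively founded model explains annotation cost more adequately.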