Crowdsourcing has been widely used to generate metadata for multimedia resources. By presenting partially described resources to human annotators, additional tags are collected and the descriptions improve. Although significant improvements in metadata quality have been reported, little is yet understood about how taggers are biased by the tags a resource has already acquired. We hypothesize that the number of existing annotations, which we take here to reflect the degree of tag completeness, influences taggers: sparse descriptions (early tagging stages) encourage the creation of more tags, whereas fuller descriptions (later tagging stages) elicit better tags. We empirically explore the relationship between tag quantity/quality and completeness degree through a study in which human crowdsourcing annotators tagged a collection of images with varying completeness degrees. Experimental results show a significant relationship between completeness degree and image tagging. To the best of our knowledge, this study is the first to explore the impact of existing annotations on image tagging.
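As a concrete illustration of the kind of analysis the study implies, the minimal sketch below groups per-image annotation records by completeness degree and correlates that degree with the number and rated quality of newly contributed tags. The input file, column names, and quality scores are assumptions made for the example; this is not the authors' data or code.

```python
# Minimal sketch (assumed data layout, not the authors' pipeline): relate the
# completeness degree of an image's description to the number and quality of
# tags contributed by crowdsourcing annotators.
import csv
from collections import defaultdict
from statistics import mean


def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5


def analyse(path="annotations.csv"):
    # Expected columns (hypothetical): image_id, completeness_degree,
    # new_tag_count, mean_tag_quality (e.g. an averaged judge rating).
    degrees, counts, qualities = [], [], []
    per_degree = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            d = int(row["completeness_degree"])
            n = int(row["new_tag_count"])
            q = float(row["mean_tag_quality"])
            degrees.append(d)
            counts.append(n)
            qualities.append(q)
            per_degree[d].append((n, q))

    # Hypothesis from the abstract: sparser descriptions -> more new tags
    # (negative correlation with completeness), fuller descriptions ->
    # better tags (positive correlation with completeness).
    print("corr(completeness, #new tags):   %.3f" % pearson(degrees, counts))
    print("corr(completeness, tag quality): %.3f" % pearson(degrees, qualities))
    for d in sorted(per_degree):
        ns, qs = zip(*per_degree[d])
        print(f"degree {d}: mean new tags {mean(ns):.2f}, "
              f"mean quality {mean(qs):.2f}")


if __name__ == "__main__":
    analyse()
```

A correlation is only one way to quantify the relationship; per-degree means (printed in the loop) make it easier to see whether the effect is monotonic across tagging stages.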