Labeling images with a computer game
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Classification of user image descriptions
International Journal of Human-Computer Studies
Designing games with a purpose
Communications of the ACM
Today's and tomorrow's retrieval practice in the audiovisual archive
Proceedings of the ACM International Conference on Image and Video Retrieval
User-generated metadata in audio-visual collections
Proceedings of the 21st international conference companion on World Wide Web
Linking user generated video annotations to the web of data
MMM'12 Proceedings of the 18th international conference on Advances in Multimedia Modeling
Personal image tagging: a game-based approach
Proceedings of the 8th International Conference on Semantic Systems
Nichesourcing: harnessing the power of crowds of experts
EKAW'12 Proceedings of the 18th international conference on Knowledge Engineering and Knowledge Management
An evaluation of labelling-game data for video retrieval
ECIR'13 Proceedings of the 35th European conference on Advances in Information Retrieval
Agent-mediated shared conceptualizations in tagging services
Multimedia Tools and Applications
Using explicit discourse rules to guide video enrichment
Proceedings of the 22nd international conference on World Wide Web companion
Proceedings of the 21st ACM international conference on Multimedia
Recently, various crowdsourcing initiatives have shown that targeted efforts of user communities can result in massive amounts of tags. For example, the Netherlands Institute for Sound and Vision collected a large number of tags with the video labeling game Waisda?. To successfully utilize these tags, a better understanding of their characteristics is required. The goal of this paper is twofold: (i) to investigate the vocabulary that users employ when describing videos and compare it to the vocabularies used by professionals; and (ii) to establish which aspects of the video are typically described and what types of tags are used for this. We report on an analysis of the tags collected with Waisda?. With respect to the first goal, we compare the tags with a typical domain thesaurus used by professionals, as well as with a more general vocabulary. With respect to the second goal, we compare the tags to the video subtitles to determine how many tags are derived from the audio signal. In addition, we perform a qualitative study in which a sample of tags is interpreted in terms of an existing annotation classification framework. The results suggest that the tags complement the metadata provided by professional cataloguers, that the tags describe both the audio and the visual aspects of the video, and that users primarily describe objects in the video using general descriptions.
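The vocabulary comparisons described above amount to measuring how many game tags also occur in a reference vocabulary (a domain thesaurus, a general lexicon, or the words of the subtitles). A minimal sketch of such an overlap measure is shown below; the data here is purely illustrative and does not reproduce the paper's actual corpora, and the function names are our own.

```python
# Hedged sketch: fraction of game tags that also occur in a reference
# vocabulary (e.g. a professional thesaurus, or words from subtitles).
# All terms below are invented examples, not data from the study.

def normalize(term: str) -> str:
    """Case-fold and trim a term so that 'Boat ' and 'boat' match."""
    return term.strip().lower()

def coverage(tags: list[str], vocabulary: list[str]) -> float:
    """Return the fraction of (normalized) tags found in the vocabulary."""
    vocab = {normalize(t) for t in vocabulary}
    hits = [t for t in tags if normalize(t) in vocab]
    return len(hits) / len(tags) if tags else 0.0

tags = ["Boat", "water", "interview", "sailing"]
thesaurus = ["boat", "interview", "harbour"]
subtitle_words = ["today", "we", "go", "sailing", "on", "the", "water"]

print(coverage(tags, thesaurus))       # → 0.5
print(coverage(tags, subtitle_words))  # → 0.5
```

In practice a study like this would also need lemmatization and spelling normalization before matching, since player-typed tags are noisy; exact string matching as above gives only a lower bound on the true overlap.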