Proceedings of the 21st ACM international conference on Multimedia
Games with a purpose (GWAPs) are increasingly used in audio-visual collections as a mechanism for annotating videos through tagging. This trend is driven by the assumption that user tags will improve video search. In this paper we study whether this is indeed the case. To this end, we create an evaluation dataset that consists of: (i) a set of videos tagged by users via a video labelling game, (ii) a set of queries derived from real-life query logs, and (iii) relevance judgements. Besides the user tags from the labelling game, we exploit the existing metadata associated with the videos (textual descriptions and curated in-house tags) as well as closed captions. Our findings show that search based on user tags alone outperforms search based on all other metadata types. Combining user tags with the other types of metadata yields a 33% increase in search performance. We also find that the search performance of user tags steadily increases as more tags are collected.
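The evaluation pipeline the abstract describes — ranking videos by one metadata field at a time and scoring the ranking against relevance judgements — can be sketched as follows. This is a minimal illustration using Okapi BM25 and average precision; the example collection, field names, and function names are hypothetical and not taken from the paper's dataset or code.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, n_docs, avg_len,
               k1=1.2, b=0.75):
    """Okapi BM25 score of one tokenized document for a tokenized query."""
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        if t not in tf:
            continue
        df = doc_freq.get(t, 0)
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
        norm = tf[t] * (k1 + 1) / (
            tf[t] + k1 * (1 - b + b * len(doc_terms) / avg_len))
        score += idf * norm
    return score

def search(query, docs, field):
    """Rank document ids by BM25 computed over a single metadata field."""
    corpus = [d[field].lower().split() for d in docs]
    n = len(corpus)
    avg_len = sum(len(c) for c in corpus) / n
    # Document frequency of each term within this field.
    doc_freq = Counter(t for c in corpus for t in set(c))
    q = query.lower().split()
    scored = [(bm25_score(q, c, doc_freq, n, avg_len), d["id"])
              for c, d in zip(corpus, docs)]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

def average_precision(ranking, relevant):
    """Average precision of one ranking against a set of relevant ids."""
    hits, total = 0, 0.0
    for rank, doc_id in enumerate(ranking, 1):
        if doc_id in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

# Hypothetical two-video collection (illustrative only): each video has a
# user-tag field and a catalogue-description field, mimicking the paper's
# comparison of metadata types.
docs = [
    {"id": 1, "tags": "cat piano concert", "description": "archival footage"},
    {"id": 2, "tags": "dog", "description": "a cat playing the piano"},
]
```

Running a query against each field separately, then averaging `average_precision` over all queries, gives the per-field mean average precision; concatenating fields before indexing gives the combined-metadata condition.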