Google challenge: incremental-learning for web video categorization on robust semantic feature space
MM '09 Proceedings of the 17th ACM international conference on Multimedia
Human annotations (titles and tags) of web videos underpin most web video applications. However, raw tags are noisy, sparse, and structureless, which limits their effectiveness. In this paper, we propose a tag transformation scheme to address these problems. We first eliminate imprecise and meaningless tags using Wikipedia, and then map the remaining tags onto the Wikipedia category set to obtain a precise, complete, and structured description of each video. Our experimental results on web video categorization demonstrate the superiority of the transformed tag space. We also apply the tag transformer in the first study that uses the Wikipedia category system to structurally recommend related videos. An online user study of the demo system suggests that our method substantially improves the browsing experience of web users.
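A minimal sketch of the filter-then-map pipeline the abstract describes, under stated assumptions: the Wikipedia article-title set and the article-to-category mapping below are hypothetical stand-ins for data that would come from a real Wikipedia dump, and the exact matching and disambiguation rules of the paper are not reproduced here.

```python
# Sketch of the two-step tag transformation: (1) filter noisy tags against
# Wikipedia article titles, (2) map surviving tags to Wikipedia categories.
# WIKI_TITLES and WIKI_CATEGORIES are hypothetical stand-ins for a dump.

WIKI_TITLES = {"football", "goal", "messi"}
WIKI_CATEGORIES = {
    "football": ["Ball games", "Team sports"],
    "goal": ["Sports terminology"],
    "messi": ["Argentine footballers"],
}

def transform_tags(raw_tags):
    """Filter imprecise tags via Wikipedia, then map survivors to categories."""
    # Step 1: drop tags that match no Wikipedia article title.
    kept = [t for t in raw_tags if t.lower() in WIKI_TITLES]
    # Step 2: replace each kept tag with its Wikipedia categories,
    # yielding a structured, de-noised description of the video.
    categories = set()
    for tag in kept:
        categories.update(WIKI_CATEGORIES.get(tag.lower(), []))
    return sorted(categories)

print(transform_tags(["Football", "asdfgh", "Messi", "lol"]))
# "asdfgh" and "lol" are filtered out; the rest map to category labels
```

The resulting category labels form a common, structured vocabulary across videos, which is what makes the transformed space usable for categorization and for category-based recommendation.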