Content-Based Image Retrieval at the End of the Early Years
IEEE Transactions on Pattern Analysis and Machine Intelligence
INTIMATE: A Web-Based Movie Recommender Using Text Categorization
WI '03 Proceedings of the 2003 IEEE/WIC International Conference on Web Intelligence
Learning Object Categories from Google's Image Search
ICCV '05 Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2
A Hybrid Movie Recommender System Based on Neural Networks
ISDA '05 Proceedings of the 5th International Conference on Intelligent Systems Design and Applications
The Story Picturing Engine---a system for automatic text illustration
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2
Real-time computerized annotation of pictures
MULTIMEDIA '06 Proceedings of the 14th annual ACM international conference on Multimedia
Online video recommendation based on multimodal fusion and relevance feedback
Proceedings of the 6th ACM international conference on Image and video retrieval
CLUE: cluster-based retrieval of images by unsupervised learning
IEEE Transactions on Image Processing
Visual tag dictionary: interpreting tags with visual words
WSMC '09 Proceedings of the 1st workshop on Web-scale multimedia corpus
Tag dictionary and its applications
Proceedings of the international conference on Multimedia information retrieval
Automatic image semantic interpretation using social action and tagging data
Multimedia Tools and Applications
Large-scale web video shot ranking based on visual features and tag co-occurrence
Proceedings of the 21st ACM international conference on Multimedia
Automatic extraction of relevant video shots of specific actions exploiting Web data
Computer Vision and Image Understanding
How might we benefit from the billions of tagged multimedia files (e.g., images, videos, audio) available on the Internet? This paper presents a new concept called the Web 2.0 Dictionary, a dynamic dictionary that takes advantage of, and is in fact built from, the huge database of tags available on the Web. The Web 2.0 Dictionary distinguishes itself from a traditional dictionary in six main ways: (1) it is fully automatic, because it downloads tags from the Web and inserts this new information into the dictionary; (2) it is dynamic, because each time a new shared image/video is uploaded, a "bag of tags" corresponding to that image/video is downloaded, updating the dictionary. The Web 2.0 Dictionary is literally updated every second, which is not true of a traditional dictionary; (3) it integrates all languages (e.g., English, Chinese), as long as the images/videos are tagged with words from those languages; (4) it is built by distilling a small amount of useful information from a massive and noisy tag database maintained by the entire Internet community, so the relatively small amount of noise present in the database does not affect it; (5) it truly reflects the most prevalent and relevant explanations in the world, unaffected by any single authority or political leaning. It is a real, free dictionary. Unlike Wikipedia [5], which can be revised by even a single person, the Web 2.0 Dictionary is very stable because its contents are informed by the whole community of users who upload photos/videos; (6) it provides a correlation value between every pair of words, ranging from 0 to 1. The correlation values stored in the dictionary have wide applications. In this paper, we demonstrate the effectiveness of the Web 2.0 Dictionary for image/video understanding and retrieval, object categorization, tag recommendation, etc.
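Point (6) above describes maintaining a 0-to-1 correlation value between every pair of tags, accumulated incrementally from the bag of tags attached to each upload. The abstract does not give the exact formula, so the sketch below assumes a Jaccard-style co-occurrence measure as one plausible instantiation; the class name `Web20Dictionary` and its methods are illustrative, not the authors' implementation.

```python
from collections import defaultdict
from itertools import combinations


class Web20Dictionary:
    """Minimal sketch of a tag-correlation dictionary built from
    per-upload bags of tags (assumed Jaccard co-occurrence measure)."""

    def __init__(self):
        self.tag_count = defaultdict(int)   # uploads containing each tag
        self.pair_count = defaultdict(int)  # uploads containing both tags

    def add_bag_of_tags(self, tags):
        """Update counts from one uploaded image/video's tag set."""
        unique = sorted(set(tags))
        for tag in unique:
            self.tag_count[tag] += 1
        for a, b in combinations(unique, 2):  # pairs in sorted order
            self.pair_count[(a, b)] += 1

    def correlation(self, a, b):
        """Jaccard overlap of the two tags' upload sets, in [0, 1]."""
        a, b = sorted((a, b))
        both = self.pair_count[(a, b)]
        union = self.tag_count[a] + self.tag_count[b] - both
        return both / union if union else 0.0
```

For example, after two uploads tagged `["beach", "sea", "sun"]` and `["beach", "sea"]`, the correlation between "beach" and "sea" is 1.0, while "beach" and "sun" co-occur in only one of two "beach" uploads and score 0.5. Each new upload only touches the counts for its own tags, which matches the abstract's claim that the dictionary updates continuously as content is shared.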