Non-relevant tags in an image folksonomy hamper the effective organization and retrieval of images. This paper therefore presents a novel technique for estimating the relevance of user-supplied tags with respect to the content of a seed image. Specifically, it computes tag relevance using both visually similar and visually dissimilar images. Compared with tag relevance estimation that relies on visually similar images only, this widens the gap in estimated relevance between tags that are relevant and tags that are irrelevant to the seed image, at a limited increase in computational cost, making the two easier to distinguish. Experiments on subsets of MIRFLICKR-25000 and MIRFLICKR-1M confirm that estimating tag relevance with both visually similar and dissimilar images yields more effective image tag refinement and tag-based image retrieval than estimation using visually similar images only.
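The idea described above can be illustrated with a minimal neighbor-voting sketch. This is not the paper's exact method: the feature representation, distance measure (Euclidean here), and neighborhood size `k` are all assumptions made for illustration. A tag's relevance is scored as the number of votes it receives from the `k` most visually similar images minus the votes it receives from the `k` most dissimilar ones, so tags that co-occur with similar content are pushed up while generic or misplaced tags are pushed down.

```python
import numpy as np

def tag_relevance(seed_feat, feats, tag_sets, tag, k=5):
    """Score `tag` for a seed image by neighbor voting.

    Hypothetical sketch: votes from the k visually most similar images
    minus votes from the k most dissimilar images (assumed Euclidean
    distance on precomputed visual features).
    """
    # Distance from the seed image to every image in the collection.
    dists = np.linalg.norm(feats - seed_feat, axis=1)
    order = np.argsort(dists)
    similar = order[:k]      # k nearest (most similar) images
    dissimilar = order[-k:]  # k farthest (most dissimilar) images

    def votes(indices):
        # Count how many of these images carry the tag.
        return sum(tag in tag_sets[i] for i in indices)

    return votes(similar) - votes(dissimilar)
```

On a toy collection where images near the seed are tagged "dog" and distant images are tagged "cat", the score for "dog" is positive and the score for "cat" is negative, which is exactly the enlarged relevance gap the abstract argues for.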