Exploring tag relevance for image tag re-ranking
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
The large number of user-tagged images on social networks can facilitate image management and image search. However, many tags are weakly relevant or irrelevant to the visual content, which degrades the performance of tag-related applications. In this paper, we propose a coupled probability transition algorithm that estimates tag-visual group relevance from the observed data and then leverages it to predict tag relevance for a new query image. The visual group for a given tag is a cluster of images that are visually similar and share that tag. The tag-visual group relevance is uncovered by alternately exploiting mutual reinforcement in the visual space and the semantic space. Experiments on the NUS-WIDE dataset demonstrate the validity and superiority of the proposed approach over existing methods.
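The abstract does not spell out the transition matrices or the coupling rule, but the described alternation between visual and semantic spaces resembles a random-walk-with-restart style propagation. The sketch below is a hypothetical illustration under that assumption: `W_visual` and `W_semantic` are assumed affinity matrices over the same set of visual groups, `r0` is the initial relevance vector for a query, and `alpha` is an assumed restart weight; none of these names or values come from the paper.

```python
import numpy as np

def coupled_propagation(W_visual, W_semantic, r0, alpha=0.8, n_iter=50):
    """Hypothetical sketch of coupled relevance propagation.

    Alternates one probability-transition step in an assumed visual-similarity
    graph with one step in an assumed semantic-similarity graph, each followed
    by a restart toward the initial relevance scores r0 (random-walk-with-
    restart style). The paper's actual matrices and coupling may differ.
    """
    # Row-normalize each affinity matrix into a transition matrix.
    P_v = W_visual / W_visual.sum(axis=1, keepdims=True)
    P_s = W_semantic / W_semantic.sum(axis=1, keepdims=True)
    r = r0.astype(float).copy()
    for _ in range(n_iter):
        # Reinforce in visual space, then in semantic space.
        r = alpha * P_v.T @ r + (1 - alpha) * r0
        r = alpha * P_s.T @ r + (1 - alpha) * r0
    return r
```

With symmetric affinities and a one-hot `r0`, the scores stay nonnegative and the group seeded by `r0` retains the largest relevance, which is the qualitative behavior such mutual-reinforcement schemes aim for.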