In this paper, we propose a novel image auto-annotation model that performs a tag-related random search over range-constrained visual neighbors of the image to be annotated. The proposed model, termed TagSearcher, is motivated by two observations: the annotation performance of many previous visual-neighbor-based models is sensitive to the number of visual neighbors used, and the probability of selecting a visual neighbor should be tag-dependent, i.e., each candidate tag can have its own trustworthy subset of visual neighbors for score prediction. TagSearcher therefore uses a constrained range, rather than a single fixed number, of visual neighbors for auto-annotation. By running a tag-related random search over a graphical model built from the range-constrained visual neighbors, TagSearcher finds the trustworthy subset for each candidate tag and exploits both visual similarities and tag correlations for score prediction. With the range constraint on visual neighbors and the tag-related random search, TagSearcher not only achieves satisfactory annotation performance but also reduces this sensitivity. Experiments on the Corel5k benchmark demonstrate its rationality and effectiveness.
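The two ideas in the abstract, a range-constrained (rather than fixed-size) visual neighborhood and a tag-dependent search over it, can be illustrated with a minimal sketch. This is not the authors' algorithm or code; the function name, the distance radius, the similarity kernel, and the tag-preference weight are all illustrative assumptions.

```python
import math
import random

def tag_search_scores(query_feat, database, candidate_tags,
                      radius=0.8, n_samples=500, seed=0):
    """Hypothetical sketch: score candidate tags by randomly searching a
    range-constrained visual neighborhood with tag-dependent weights."""
    rng = random.Random(seed)

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Range constraint: keep every database image within `radius` of the
    # query, instead of a fixed number k of nearest neighbors.
    neighbors = [(feat, tags, dist(query_feat, feat))
                 for feat, tags in database
                 if dist(query_feat, feat) <= radius]
    if not neighbors:
        return {tag: 0.0 for tag in candidate_tags}

    sims = [math.exp(-d) for _, _, d in neighbors]  # visual similarity

    scores = {}
    for tag in candidate_tags:
        # Tag-dependent sampling weights: neighbors already carrying the
        # tag are favored, so each tag searches its own trustworthy
        # subset of the neighborhood (the 2.0 boost is an assumption).
        weights = [s * (2.0 if tag in tags else 1.0)
                   for (_, tags, _), s in zip(neighbors, sims)]
        visits = rng.choices(range(len(neighbors)), weights=weights,
                             k=n_samples)
        # Score the tag by how often the search lands on neighbors that
        # actually carry it.
        hits = sum(1 for i in visits if tag in neighbors[i][1])
        scores[tag] = hits / n_samples
    return scores
```

Under this sketch, a tag supported by visually close neighbors accumulates more visits than one supported only by distant neighbors, which is the intuition behind letting each tag rely on its own part of the neighborhood.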