This paper introduces the random forest as a computational and data-structure paradigm for fusing low-level visual features with high-level semantic concepts in image retrieval. We use visual features to split the tree nodes and use image labels to supervise the splitting, so that images located at the same tree node share similar semantic concepts as well as visual similarity. Exploiting such a random forest, we define the semantic neighbor set (SNS) of a given image as the union of all images in the leaf nodes into which this image falls, and we further define the semantic similarity measure (SSM) between two images as the number of trees in which they share the same leaf node. With SNS and SSM, example-based image retrieval reduces to finding the SNS of the query image and then ranking the images in the SNS by their SSMs to the query. We also show that the technique can be adapted to keyword-based semantic image retrieval. The inherently efficient tree data structure leads to fast solutions. We present experimental results demonstrating the effectiveness of this semantic image retrieval technique.
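The SNS/SSM retrieval scheme described above can be sketched as follows. This is a minimal illustration only, assuming scikit-learn's `RandomForestClassifier` and toy random features and labels; it is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))        # toy stand-in for low-level visual features
y = rng.integers(0, 4, size=100)     # toy semantic labels that supervise the splits

# Label-supervised forest: splits use feature values, chosen to separate labels.
forest = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

# Leaf index of every database image in every tree: shape (n_images, n_trees).
leaves = forest.apply(X)

# Query with the first image; its leaf indices across all trees.
q_leaves = forest.apply(X[0:1])[0]

# SSM(query, i) = number of trees in which image i shares the query's leaf.
scores = (leaves == q_leaves).sum(axis=1)

# SNS = images sharing at least one leaf with the query; rank them by SSM.
sns = np.flatnonzero(scores > 0)
ranking = sns[np.argsort(-scores[sns], kind="stable")]
```

Since the query shares every leaf with itself, its SSM to itself equals the number of trees, so it ranks first; other images are ordered by how many trees agree that they are semantic neighbors of the query.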