The performance of object-based image retrieval systems remains unsatisfactory, because it depends heavily on visual similarity and regularity among images of the same semantic class. To retrieve images beyond their visual appearance, we propose a novel image representation, the bag of visual synsets. A visual synset is defined as a probabilistic, relevance-consistent cluster of visual words (quantized vectors of region descriptors such as SIFT), in which the member visual words w induce similar semantic inferences P(c|w) about the image class c. Visual synsets are obtained by finding an optimal distributional clustering of visual words under the Information Bottleneck principle. Experiments on the Caltech-256 dataset show that, by fusing visual words in a relevance-consistent way, visual synsets can partially bridge the visual differences among images of the same class and deliver satisfactory retrieval of relevant images with differing visual appearance.
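The clustering step described above can be illustrated with a minimal sketch: an agglomerative Information Bottleneck style procedure that greedily merges the pair of visual-word clusters whose merge loses the least class information, measured by the prior-weighted Jensen-Shannon divergence between their class posteriors P(c|w). This is an illustrative simplification, not the paper's exact algorithm; the function names, the greedy pairwise strategy, and the toy co-occurrence matrix are all assumptions made for the example.

```python
import numpy as np

def js_divergence(p, q, wp, wq):
    """Jensen-Shannon divergence between class posteriors p and q,
    with mixture weights proportional to the word priors wp, wq."""
    pi1, pi2 = wp / (wp + wq), wq / (wp + wq)
    m = pi1 * p + pi2 * q
    def kl(a, b):
        mask = a > 0  # 0 * log(0) is taken as 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return pi1 * kl(p, m) + pi2 * kl(q, m)

def agglomerative_ib(counts, n_synsets):
    """Greedily merge visual words into relevance-consistent 'synsets'
    so that each merge loses as little class information as possible.
    counts: (n_words, n_classes) word/class co-occurrence matrix.
    Returns a list of clusters, each a list of word indices."""
    counts = np.asarray(counts, dtype=float)
    priors = counts.sum(axis=1) / counts.sum()           # P(w)
    p_c_w = counts / counts.sum(axis=1, keepdims=True)   # P(c|w)
    clusters = [[i] for i in range(len(counts))]
    dists = list(p_c_w)
    weights = list(priors)
    while len(clusters) > n_synsets:
        best, cost = None, np.inf
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # information loss incurred by merging clusters i and j
                d = (weights[i] + weights[j]) * js_divergence(
                    dists[i], dists[j], weights[i], weights[j])
                if d < cost:
                    best, cost = (i, j), d
        i, j = best
        wi, wj = weights[i], weights[j]
        merged = (wi * dists[i] + wj * dists[j]) / (wi + wj)
        clusters[i] += clusters[j]
        dists[i], weights[i] = merged, wi + wj
        del clusters[j], dists[j], weights[j]
    return clusters
```

On a toy matrix where words 0 and 1 co-occur mostly with class 0 and words 2 and 3 with class 1, `agglomerative_ib([[9, 1], [8, 2], [1, 9], [2, 8]], 2)` groups {0, 1} and {2, 3}: words with similar P(c|w) fuse into one synset even though their underlying descriptors may look different.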