One of the major problems with existing content-based image retrieval systems is that their objective similarity measures rarely match the user's subjective, context-dependent interpretation of similarity. We propose a novel approach that links images to textual information in order to overcome this problem. First, salient image objects are extracted together with their structural and visual features. Next, keywords and images are linked in two stages: (1) low-level visual features of objects are mapped to related words using feature lexicons, and (2) words expressing higher-level semantic concepts are assigned to images on the basis of these feature-related words, lexical definitions, and the user's relevance feedback. Experimental results show that this two-level, multi-modal linking approximates the user's retrieval semantics better and supports a wide variety of querying and browsing schemes, enabling higher-level interactivity.
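The two-stage linking described above can be illustrated with a minimal sketch. Everything here is hypothetical: the feature lexicon, concept lexicon, feature vectors, and the distance threshold are illustrative stand-ins, not the paper's actual data structures or algorithm.

```python
# Illustrative sketch of two-stage keyword-image linking; all lexicons,
# vectors, and thresholds are hypothetical examples.
import math

# Stage 1 assumption: a "feature lexicon" maps prototype low-level feature
# vectors (e.g., dominant color of a salient object) to feature-related words.
FEATURE_LEXICON = {
    (0.9, 0.1, 0.1): "red",
    (0.1, 0.1, 0.9): "blue",
    (0.2, 0.8, 0.2): "green",
}

# Stage 2 assumption: lexical definitions relate feature-related words to
# higher-level concept words.
CONCEPT_LEXICON = {
    "sky": {"blue"},
    "grass": {"green"},
    "sunset": {"red"},
}

def feature_words(object_features, threshold=0.5):
    """Stage 1: map each salient object's feature vector to the nearest
    lexicon word, keeping only sufficiently close matches."""
    words = []
    for vec in object_features:
        best_word, best_dist = None, float("inf")
        for prototype, word in FEATURE_LEXICON.items():
            d = math.dist(vec, prototype)
            if d < best_dist:
                best_word, best_dist = word, d
        if best_dist <= threshold:
            words.append(best_word)
    return words

def concept_words(words):
    """Stage 2: assign higher-level concept words whose defining
    feature-related words all occur among the stage-1 words."""
    found = set(words)
    return sorted(c for c, terms in CONCEPT_LEXICON.items() if terms <= found)

# Two salient objects: one bluish region, one greenish region.
objects = [(0.15, 0.12, 0.85), (0.25, 0.75, 0.2)]
stage1 = feature_words(objects)   # feature-related words
stage2 = concept_words(stage1)    # higher-level concept words
```

In the full system, relevance feedback would then reweight or prune the assigned concept words, a step omitted from this sketch.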