An important problem in Content-Based Image Retrieval (CBIR) systems is the gap between high-level human semantics and low-level machine features. In this paper, we develop a novel approach based on the intuition that a query, together with the user's responses during a relevance-feedback session, provides sufficient cues for learning the multiple high-level concepts associated with the query image. For example, a single query image showing a yellow rose embodies several high-level semantics: yellow roses, any rose, any yellow flower, a flower in general, a flower viewed from the front, and so on. Unlike past approaches that modelled the user's positive responses as a single class with a unimodal probability distribution function, we show that it is more appropriate to group them into multiple connected components in the feature space. We demonstrate that these components capture and differentiate between the various semantics of an image, and that they can be computed automatically using a Gaussian Mixture Model. Results on several images illustrate the potential of these connected components to capture the multiple semantics of an image.
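The core idea above can be sketched in code: instead of fitting a single unimodal distribution to the user's positive feedback, fit a Gaussian Mixture Model and let a model-selection criterion decide how many components (semantic groupings) the feedback actually contains. The following is a minimal illustrative sketch, not the paper's exact method; the 2-D feature vectors, the two simulated clusters, and the use of BIC for choosing the component count are all assumptions.

```python
# Hypothetical sketch: grouping positive relevance-feedback examples
# into multiple components with a Gaussian Mixture Model (GMM).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated low-level feature vectors for positively-judged images:
# two distinct semantic groups (e.g. "yellow rose" vs. "red rose").
positives = np.vstack([
    rng.normal(loc=[0.9, 0.8], scale=0.05, size=(20, 2)),  # group 1
    rng.normal(loc=[0.2, 0.1], scale=0.05, size=(20, 2)),  # group 2
])

# Fit GMMs with 1..4 components and keep the one with the lowest BIC,
# rather than assuming the positives form a single unimodal class.
best_gmm = min(
    (GaussianMixture(n_components=k, random_state=0).fit(positives)
     for k in range(1, 5)),
    key=lambda g: g.bic(positives),
)

# Each mixture component plays the role of one "connected component"
# capturing one semantic interpretation of the query.
labels = best_gmm.predict(positives)
print("components found:", best_gmm.n_components)
```

Here a multimodal fit is preferred because splitting the two tight clusters raises the likelihood far more than the BIC penalty for extra parameters; in the paper's setting, each recovered component would correspond to one high-level semantic of the query image.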