Access to information nowadays requires the effective management of multimedia databases, and among the challenges posed to the scientific community over recent decades, multimedia retrieval techniques (particularly image retrieval) have become an active research direction. Introduced to overcome the main drawbacks of text-based image retrieval, namely the subjective and manual annotation of images, content-based image retrieval (CBIR) systems index images according to low-level visual features such as color, texture, and shape in order to retrieve similar images. However, despite the progress achieved in content-based image retrieval, in particular with the relevance feedback approach, where the user refines the search by marking items as relevant or not relevant, current CBIR systems still face a major difficulty they have yet to overcome: how to bridge the "semantic gap", that is, the mismatch between the systems' capabilities and the needs of their users.

In this paper, we address the problem of relating low-level features to high-level concepts in order to bring out the semantic content of images. Our aim is to combine content-based and metadata-based approaches to image retrieval from a user perspective, so as to yield better results and overcome the shortcomings of each technique taken separately. To represent the semantic content of images, we propose a model that takes account of the interaction between the user and the metadata. In particular, we model the user's semantic preferences by analyzing his or her answers during the relevance feedback process. Furthermore, we introduce a new machine learning technique that modifies the weights (i.e., the relative importance) of the metadata representing the semantic content of images.
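The weight-updating idea described above can be illustrated with a classic Rocchio-style scheme: keywords attached to images the user marks relevant gain weight, while keywords attached to non-relevant images lose weight. This is only a minimal sketch under that assumption; the function name `update_weights` and the parameters `alpha`, `beta`, and `gamma` are illustrative and are not taken from the paper.

```python
def update_weights(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style re-weighting of metadata keywords (illustrative sketch).

    `query` and each dict in `relevant` / `nonrelevant` map a keyword
    to its current weight. Keywords occurring in relevant images are
    boosted; keywords occurring in non-relevant images are penalized.
    """
    # Start from the original query weights, scaled by alpha.
    new = {term: alpha * w for term, w in query.items()}
    # Add the (signed) average contribution of each feedback set.
    for docs, coef in ((relevant, beta), (nonrelevant, -gamma)):
        for doc in docs:
            for term, w in doc.items():
                new[term] = new.get(term, 0.0) + coef * w / len(docs)
    # Clamp negative weights to zero: weights encode keyword importance.
    return {term: max(0.0, w) for term, w in new.items()}
```

For example, after a query weighted `{"beach": 1.0}`, one relevant image tagged `{"beach": 1.0, "sunset": 1.0}` and one non-relevant image tagged `{"city": 1.0}` would raise the weight of "beach", introduce "sunset" as a new positively weighted keyword, and drive "city" down to zero.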