MULTIMEDIA '06 Proceedings of the 14th annual ACM international conference on Multimedia
Ontology driven content based image retrieval
Proceedings of the 6th ACM international conference on Image and video retrieval
Image retrieval: Ideas, influences, and trends of the new age
ACM Computing Surveys (CSUR)
Deriving a large scale taxonomy from Wikipedia
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 2
DBpedia: a nucleus for a web of open data
ISWC'07/ASWC'07 Proceedings of the 6th International Semantic Web Conference and 2nd Asian Semantic Web Conference
Overview of the WikipediaMM task at ImageCLEF 2008
CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access
Query expansion with conceptnet and wordnet: an intrinsic comparison
AIRS'06 Proceedings of the Third Asia conference on Information Retrieval Technology
Multimodal image retrieval over a large database
CLEF'09 Proceedings of the 10th international conference on Cross-language evaluation forum: multimedia experiments
Image retrieval in large-scale databases currently relies on matching textual strings from the query against image annotations. However, this approach requires accurate image annotations, which are rarely available on the Web. To tackle this issue, we propose a query reformulation method that reduces the influence of noisy image annotations. We extract a ranked list of concepts related to the query terms from WordNet and Wikipedia and use them to expand the initial query. Visual concepts are then used to re-rank the results of queries that contain, explicitly or implicitly, visual cues. First evaluations on a diversified corpus of 150,000 images were convincing: the proposed system ranked 4th and 2nd at the WikipediaMM task of the ImageCLEF 2008 campaign [1].
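The expansion step described in the abstract can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the `RELATED` table is a hypothetical stand-in for the ranked lists of concepts mined from WordNet and Wikipedia, and the function names and thresholds are assumptions for illustration only.

```python
# Hypothetical stand-in for ranked related concepts mined from
# WordNet/Wikipedia: term -> list of (concept, relatedness score),
# already sorted by descending score.
RELATED = {
    "car": [("automobile", 0.9), ("vehicle", 0.7), ("wheel", 0.4)],
    "beach": [("sand", 0.8), ("sea", 0.75), ("coast", 0.6)],
}

def expand_query(query, top_k=2, min_score=0.5):
    """Append up to top_k related concepts per query term,
    keeping only concepts whose relatedness score reaches min_score.
    Returns the expanded term list, duplicates removed."""
    expanded = list(query)
    for term in query:
        for concept, score in RELATED.get(term, [])[:top_k]:
            if score >= min_score and concept not in expanded:
                expanded.append(concept)
    return expanded

print(expand_query(["car", "beach"]))
# ['car', 'beach', 'automobile', 'vehicle', 'sand', 'sea']
```

The score threshold models the role of the ranking: only strongly related concepts are allowed to expand the query, which limits the drift that noisy annotations would otherwise amplify.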