This paper concentrates on the user-centered search task at ImageCLEF 2004. In this work, we combine textual and visual features for cross-language image retrieval and propose two interactive retrieval systems, T_ICLEF and VCT_ICLEF. The first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. The experimental results show that VCT_ICLEF performed better in almost all cases: it helped users find the topic image in fewer iterations, saving up to 2 iterations. Our user survey also reported that combining textual and visual information helps users indicate to the system what they really have in mind.
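The abstract does not spell out the feedback mechanism itself. As a rough, hypothetical illustration of the kind of interactive loop described (not the paper's actual algorithm), the sketch below applies a classic Rocchio-style relevance feedback update to a query vector and a simple late fusion of textual and visual similarity scores; the function names, weights, and fusion scheme are all assumptions for illustration only.

```python
import numpy as np

def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance-feedback update (illustrative, not the paper's
    method): move the query toward the centroid of relevant examples and
    away from the centroid of non-relevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

def combined_score(text_sim, visual_sim, w_text=0.5):
    """Hypothetical late fusion of textual and visual similarities,
    as a VCT_ICLEF-like system might weight the two modalities."""
    return w_text * text_sim + (1.0 - w_text) * visual_sim
```

In such a loop, each iteration would re-rank the collection with `combined_score` after updating the query from the user's marked images; a text-only system like T_ICLEF would correspond to `w_text=1.0`.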