Beyond inaccurate segmentation, the performance of region-based image retrieval is still limited by the diverse appearances of semantically similar objects. In contrast, humans' linguistic descriptions of image objects can reveal object information at a higher level. Using a partially annotated region collection as a "visual dictionary", this paper proposes a semantic-sensitive region retrieval framework based on mid-level visual and textual object descriptions. To achieve this goal, a subset of the image database is first segmented into regions, which are manually annotated with keywords to construct the visual dictionary. Second, to associate appearance-diverse but semantically similar objects, a Bayesian reasoning approach is adopted to compute the semantic similarity between two regions. This inference method uses the visual dictionary to bridge unannotated image regions at the semantic level. On top of this reasoning framework, both query-by-example and query-by-keyword interfaces are provided to facilitate user queries. Experimental comparisons of our method against a visual-only region matching method demonstrate its effectiveness in improving region retrieval performance and narrowing the semantic gap.
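The abstract does not specify the form of the Bayesian inference, so the following is only a minimal sketch of the general idea: estimate a keyword posterior for an unannotated region by kernel-weighting nearby annotated dictionary regions, then compare two regions through their keyword distributions rather than raw visual features. All function names, the Gaussian kernel, and the inner-product similarity are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def keyword_posterior(region_feat, dict_feats, dict_keywords, vocab, sigma=1.0):
    """Estimate P(keyword | region) by kernel-weighting annotated dictionary regions.

    region_feat   : feature vector of the query region
    dict_feats    : (N, d) array of dictionary-region features
    dict_keywords : list of keyword lists, one per dictionary region
    vocab         : ordered keyword vocabulary
    """
    # Gaussian kernel weight for each dictionary region (an assumed choice).
    d2 = np.sum((dict_feats - region_feat) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Accumulate weights onto each region's annotated keywords.
    post = np.zeros(len(vocab))
    for wi, kws in zip(w, dict_keywords):
        for kw in kws:
            post[vocab.index(kw)] += wi
    s = post.sum()
    return post / s if s > 0 else post

def semantic_similarity(p1, p2):
    """Compare two regions via their keyword posteriors (inner product)."""
    return float(np.dot(p1, p2))
```

Under this sketch, two regions that look different but fall near dictionary regions sharing the same keyword (e.g. "tiger") receive similar posteriors and hence high semantic similarity, which is the kind of appearance-invariant matching the framework aims for.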