Automatic image tagging seeks to assign relevant words to images that describe their actual content without intermediate manual annotation. A common problem shared by most previous learning approaches to automatic image tagging is that all segmented regions of an image are treated as equally important and processed identically. The goal of this paper is to develop a novel annotation approach based on regions of interest that takes into account users' real viewing experience and assigns each region a visual weight according to its degree of interest. To do this, we first segment the image into several regions. We then calculate the degree of interest for each region, drawing on experimental findings from human visual attention and cognitive psychology. In the third step, each region is assigned a visual weight, from which we obtain the prior probability of the region given a concept. At the annotation stage, we compute the posterior probability using Bayes' theorem to select the most likely concepts for tagging an unseen image. The proposed method is evaluated on a well-known benchmark image collection, and the results demonstrate its competitiveness.
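The annotation stage described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual method: it assumes per-concept region likelihoods P(region | concept) and attention-based visual weights are already available, and combines them with concept priors via Bayes' theorem. All array names and the toy numbers are hypothetical.

```python
import numpy as np

def rank_concepts(region_likelihoods, visual_weights, concept_priors):
    """Rank candidate concepts for an unseen image via Bayes' theorem.

    region_likelihoods: array (n_concepts, n_regions) of P(region | concept)
    visual_weights:     array (n_regions,) degree-of-interest weights, summing to 1
    concept_priors:     array (n_concepts,) of P(concept)
    Returns the posterior distribution over concepts.
    """
    # Weight each region's log-likelihood by its visual weight, so that
    # salient regions contribute more to the image-level likelihood.
    log_lik = (np.log(region_likelihoods) * visual_weights).sum(axis=1)
    # Bayes' theorem: P(concept | image) proportional to P(image | concept) * P(concept)
    log_post = log_lik + np.log(concept_priors)
    post = np.exp(log_post - log_post.max())  # subtract max for numerical stability
    return post / post.sum()

# Hypothetical toy example: 3 concepts, 2 segmented regions.
lik = np.array([[0.6, 0.1],
                [0.2, 0.5],
                [0.2, 0.4]])
w = np.array([0.7, 0.3])          # the first region is more salient
priors = np.full(3, 1.0 / 3.0)    # uniform concept prior
posterior = rank_concepts(lik, w, priors)
print(posterior.argmax())          # index of the most likely concept
```

Working in log space avoids underflow when the number of regions grows; the most likely concepts (here, the argmax) would be used as tags for the unseen image.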