Face Image Annotation Based on Latent Semantic Space and Rules
KES '08 Proceedings of the 12th international conference on Knowledge-Based Intelligent Information and Engineering Systems, Part II
This paper describes keyword-based annotation of face images using latent semantic indexing, together with experimental results obtained in FIARS. Two latent semantic spaces are constructed from visual and symbolic features; the visual features correspond to lengths measured at certain places of a face, and the symbolic features to keywords. One latent semantic space is built from the visual features alone and is used to retrieve similar face images; the other is built from both feature types and is used to assign keywords to a given face image. Furthermore, two types of visual features are examined: one specified in terms of the lengths of face parts, the other in terms of points on the outlines of the face and its parts. In the experiments, recall and precision of the assigned keywords are measured for both types of visual features.
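The joint latent space described above can be sketched with a small SVD-based example. This is a minimal illustration, not the paper's implementation: the feature values, keyword vocabulary, latent dimensionality `k`, and the `annotate` helper are all hypothetical, and a new face image (with visual features only) is folded into the latent space and annotated with the keywords of its nearest training images.

```python
import numpy as np

# Hypothetical toy data: 6 training face images described by
# 4 visual features (normalized lengths of face parts, rows) and
# 3 keywords (binary indicator rows). Columns = face images.
visual = np.array([
    [0.8, 0.7, 0.2, 0.3, 0.9, 0.1],   # e.g. face width
    [0.4, 0.5, 0.9, 0.8, 0.3, 0.7],   # e.g. distance between eyes
    [0.6, 0.6, 0.3, 0.2, 0.7, 0.4],   # e.g. nose length
    [0.2, 0.3, 0.8, 0.9, 0.1, 0.6],   # e.g. mouth width
])
keywords = np.array([
    [1, 1, 0, 0, 1, 0],               # keyword "round"
    [0, 0, 1, 1, 0, 1],               # keyword "narrow"
    [1, 0, 0, 1, 1, 0],               # keyword "long-nose"
])
keyword_names = ["round", "narrow", "long-nose"]

# Joint latent semantic space from both feature types
# (the space used for keyword assignment).
A = np.vstack([visual, keywords]).astype(float)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                  # latent dimensionality (assumption)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]

def annotate(query_visual, top_n=2):
    """Fold a new face (visual features only) into the latent space
    and return keywords voted for by its most similar training images."""
    # Pad the unknown keyword rows with zeros, then project:
    # q_hat = S_k^{-1} U_k^T q  (standard LSI fold-in).
    q = np.concatenate([query_visual, np.zeros(keywords.shape[0])])
    q_hat = (Uk.T @ q) / sk
    # Cosine similarity against the training images in latent space.
    docs = Vtk.T                       # one row per training image
    sims = docs @ q_hat / (
        np.linalg.norm(docs, axis=1) * np.linalg.norm(q_hat) + 1e-12)
    best = np.argsort(sims)[::-1][:top_n]
    # Aggregate the keyword indicators of the retrieved images.
    votes = keywords[:, best].sum(axis=1)
    return [keyword_names[i] for i in np.argsort(votes)[::-1] if votes[i] > 0]

# A query face whose measurements resemble the "round" training faces.
print(annotate(np.array([0.85, 0.35, 0.65, 0.2])))
```

A space built from the visual rows alone would support the retrieval-only use described in the paper; the joint space is what lets keyword rows be recovered for an image that arrives without any.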