This paper presents an approach for incorporating contextual metadata into a keyword-based photo retrieval process. We use our mobile annotation system, PhotoMap, to create metadata describing the context of a photo shoot (e.g., street address, nearby objects, season, lighting, nearby people). These metadata are then used to generate a set of stamped words for indexing each photo. We adapt the Vector Space Model (VSM) to transform these shoot-context words into document-vector terms, and we apply spatial reasoning to infer new potential indexing terms. We define methods for weighting these terms and for matching queries against them. We also report retrieval experiments carried out with PhotoMap and geotagged Flickr photos, and we illustrate the advantages of using georeferenced Wikipedia objects for indexing photos.
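To make the VSM adaptation concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): each photo's shoot-context words are turned into a TF-IDF-weighted vector, and queries are matched by cosine similarity. All function names and the toy data are hypothetical.

```python
import math
from collections import Counter

def build_index(photos):
    """Build TF-IDF weighted vectors from per-photo context terms.

    `photos` maps a photo id to its list of shoot-context words
    (e.g., street-address tokens, season, names of nearby objects).
    """
    doc_freq = Counter()                      # in how many photos each term occurs
    for terms in photos.values():
        doc_freq.update(set(terms))
    n = len(photos)
    index = {}
    for pid, terms in photos.items():
        tf = Counter(terms)
        # Classic tf * idf weighting; terms present in every photo get weight 0.
        index[pid] = {t: tf[t] * math.log(n / doc_freq[t]) for t in tf}
    return index

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(index, query_terms):
    """Rank photo ids by cosine similarity to a keyword query."""
    q = {t: 1.0 for t in query_terms}
    return sorted(index, key=lambda pid: cosine(q, index[pid]), reverse=True)
```

A query such as `search(index, ["eiffel"])` then ranks highest the photos whose shoot context mentioned that term; the paper's spatial reasoning step would enrich each photo's term list (e.g., with names of inferred nearby places) before indexing.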