Smart phones are increasingly equipped with diverse sensors, each offering a different cue for understanding a photo. When a user takes a photo, the image content, the capture location, and even the direction the user is facing can all help us understand the photo itself, and each of these factors can serve as an input to an image search system. However, most existing algorithms for image retrieval (or annotation) focus only on the content and location of the images, completely ignoring the important facing-direction factor and overlooking the capabilities of the available sensors. In this paper, we propose a novel ranking algorithm that combines multiple sensor cues with a traditional content-based image retrieval system, and we further apply it to image annotation. We evaluate different combinations of sensors and investigate how geolocation, image content, and compass direction influence image retrieval.
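The fusion of sensor cues described above can be sketched as a weighted combination of per-cue similarities. The sketch below is illustrative only: the weight values, the 500 m distance scale, and the field names (`content_sim`, `bearing`) are assumptions for the example, not values or identifiers from the paper.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_diff(a, b):
    """Smallest absolute difference between two compass bearings, in [0, 180] degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def fused_score(query, cand, w_content=0.5, w_geo=0.3, w_dir=0.2):
    """Rank a candidate image by fusing content, geolocation, and compass cues.

    `content_sim` is assumed already normalized to [0, 1]; the geo cue decays
    exponentially with distance, and the compass cue maps a bearing difference
    of 0..180 degrees onto a similarity of 1..0.
    """
    geo_sim = math.exp(-haversine_m(query["lat"], query["lon"],
                                    cand["lat"], cand["lon"]) / 500.0)
    dir_sim = 1.0 - bearing_diff(query["bearing"], cand["bearing"]) / 180.0
    return w_content * cand["content_sim"] + w_geo * geo_sim + w_dir * dir_sim
```

With such a score, candidates are simply sorted in descending order; setting `w_geo` or `w_dir` to zero recovers a content-only baseline, which is one way the sensor combinations mentioned above could be compared.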