Generating summaries and visualization for large collections of geo-referenced photographs

  • Authors:
  • Alexander Jaffe (Yahoo! Research Berkeley, Berkeley, CA); Mor Naaman (Yahoo! Research Berkeley, Berkeley, CA); Tamir Tassa (The Open University of Israel, Ra'anana, Israel); Marc Davis (Yahoo! Inc., Sunnyvale, CA)

  • Venue:
  • MIR '06 Proceedings of the 8th ACM international workshop on Multimedia information retrieval
  • Year:
  • 2006


Abstract

We describe a framework for automatically selecting a summary set of photos from a large collection of geo-referenced photographs. Such large collections are inherently difficult to browse, and grow more unwieldy as they increase in size, making summaries an important tool in rendering these collections accessible. Our summary algorithm is based on spatial patterns in photo sets, as well as textual-topical patterns and user (photographer) identity cues. The algorithm can be expanded to support social, temporal, and other factors. The summary can thus be biased by the content of the query, the user making the query, and the context in which the query is made.

A modified version of our summarization algorithm serves as a basis for a new map-based visualization of large collections of geo-referenced photos, called Tag Maps. Tag Maps visualize the data by placing highly representative textual tags on relevant map locations in the viewed region, effectively providing a sense of the important concepts embodied in the collection.

An initial evaluation of our implementation on a set of geo-referenced photos shows that our algorithm and visualization perform well, producing summaries and views that are highly rated by users.
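The abstract's idea of picking "highly representative" tags per map location can be illustrated with a minimal sketch: bucket photos into grid cells and rank each cell's tags with a tf-idf-like score (frequent in the cell, rare across cells). This is not the paper's algorithm, only a simplified stand-in; the `tag_map` function, the `(lat, lon, tags)` input format, and the grid-cell clustering are all assumptions made for illustration.

```python
import math
from collections import Counter, defaultdict

def tag_map(photos, cell_size=1.0, top_k=1):
    """Pick representative tags per map grid cell (illustrative sketch).

    `photos` is a list of (lat, lon, tags) tuples -- a hypothetical,
    simplified stand-in for geo-referenced photo records.
    """
    # Bucket photos into grid cells of `cell_size` degrees.
    cell_tags = defaultdict(Counter)
    for lat, lon, tags in photos:
        cell = (int(lat // cell_size), int(lon // cell_size))
        cell_tags[cell].update(tags)

    # Cell frequency: in how many cells does each tag appear?
    df = Counter()
    for counts in cell_tags.values():
        df.update(counts.keys())
    n_cells = len(cell_tags)

    # Score tags per cell: term frequency x inverse cell frequency,
    # so tags common everywhere ("2006", "photo") rank low.
    result = {}
    for cell, counts in cell_tags.items():
        ranked = sorted(
            counts,
            key=lambda t: counts[t] * math.log((1 + n_cells) / (1 + df[t])),
            reverse=True,
        )
        result[cell] = ranked[:top_k]
    return result

photos = [
    (37.87, -122.27, ["berkeley", "campus"]),
    (37.88, -122.26, ["berkeley", "tower"]),
    (40.71, -74.00, ["nyc", "skyline"]),
]
print(tag_map(photos))  # one top tag per occupied grid cell
```

In the full system the placement step would also have to resolve label overlap on the rendered map; this sketch stops at choosing the tags.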