Estimating poses of world's photos with geographic metadata

  • Authors:
  • Zhiping Luo;Haojie Li;Jinhui Tang;Richang Hong;Tat-Seng Chua

  • Affiliations:
  • School of Computing, National University of Singapore (all authors)

  • Venue:
  • MMM '10: Proceedings of the 16th International Conference on Advances in Multimedia Modeling
  • Year:
  • 2010

Quantified Score

Hi-index 0.00

Abstract

Users can explore the world by viewing place-related photos on Google Maps. One possible way is to retrieve nearby photos for viewing. However, for a given geo-location, the map returns many photos whose view directions do not point toward the desired region. To address this problem, knowing the poses of photos in advance, in terms of position and view direction, is a feasible solution: the system can then return only nearby photos whose view directions point toward the target place, facilitating users' exploration of that place. A photo's view direction can be easily obtained if the extrinsic parameters of its camera are well estimated. Unfortunately, directly employing conventional methods for this is infeasible, since photos falling within a certain radius of a place are observed to be largely diverse in both content and view. In this paper, we present a novel method to estimate the view directions of the world's photos well, and then further obtain their poses referenced on Google Maps using the photos' geographic metadata. The key idea of our method is to first generate a set of subsets when facing a large number of photos near a place, and then to reconstruct the scenes expressed by those subsets using the normalized 8-point algorithm. We embed a search-based strategy with scene alignment to produce those subsets. We evaluate our method via a user study on an online application developed by us, and the results show the effectiveness of our method.
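The abstract names the normalized 8-point algorithm as its scene-reconstruction step but, being an abstract, gives no implementation details. Below is a minimal NumPy sketch of that standard algorithm (Hartley's normalized variant), which estimates the fundamental matrix F relating point correspondences between two photos; the two-view geometry encoded in F is what yields relative camera orientation, and hence view direction. The function names are our own illustration, not the paper's code.

```python
import numpy as np

def normalize_points(pts):
    """Translate 2D points so their centroid is at the origin and scale them
    so the mean distance from the origin is sqrt(2). Returns homogeneous
    points and the 3x3 normalization transform (Hartley normalization)."""
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ h.T).T, T

def fundamental_8point(pts1, pts2):
    """Normalized 8-point algorithm: estimate F such that x2^T F x1 = 0
    for every correspondence (pts1[i], pts2[i]); needs >= 8 matches."""
    x1, T1 = normalize_points(pts1)
    x2, T2 = normalize_points(pts2)
    # Each correspondence contributes one row of the linear system A f = 0,
    # where f is the row-major vectorization of F.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # right singular vector of smallest value
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt
    # Undo the normalization and fix the overall scale.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

On exact synthetic correspondences the recovered F satisfies the epipolar constraint x2^T F x1 = 0 to machine precision; on real photo matches it would be wrapped in a robust loop (e.g. RANSAC), which is the kind of pairwise step a scene reconstruction over photo subsets builds on.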