Automatically and accurately conflating orthoimagery and street maps

  • Authors:
  • Ching-Chien Chen; Craig A. Knoblock; Cyrus Shahabi; Yao-Yi Chiang; Snehal Thakkar

  • Affiliations:
  • University of Southern California, Los Angeles, CA (all authors)

  • Venue:
  • Proceedings of the 12th annual ACM international workshop on Geographic information systems
  • Year:
  • 2004

Abstract

The recent growth of geospatial information on the web has made it easy to access a wide variety of maps and orthoimagery. By integrating these maps and imagery, we can create intelligent images that combine the visual appeal and accuracy of imagery with the detailed attribution information often contained in diverse maps. However, accurately integrating maps and imagery from different data sources remains a challenging task, because spatial data obtained from various sources may have different projections and different accuracy levels. Most existing algorithms handle only vector-to-vector spatial data integration or require human intervention to accomplish imagery-to-map conflation. In this paper, we describe an information integration approach that utilizes common vector datasets as "glue" to automatically conflate imagery with street maps. We present efficient techniques to automatically extract road intersections from imagery and maps as control points. We also describe a specialized point pattern matching algorithm to align the two point sets, and conflation techniques to align the imagery with the maps. We show that these conflation techniques can automatically and accurately align maps with images of the same area. In particular, using the approach described in this paper, our system automatically aligns a set of TIGER maps for an area in El Segundo, CA to the corresponding orthoimagery with an average error of 8.35 meters per pixel. This is a significant improvement, considering that simply combining the TIGER maps with the corresponding imagery based on the geographic coordinates provided by the sources results in an error of 27 meters per pixel.
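
To make the point pattern matching step concrete, below is a minimal sketch in Python. It assumes the misalignment between the two intersection point sets is approximately a two-dimensional translation; the names (match_by_translation, tolerance) are illustrative and not from the paper, whose actual algorithm searches a more general space of transformations.

```python
# A minimal sketch of point-pattern matching between road intersections
# extracted from a map and from orthoimagery, assuming (as a simplification)
# that the misalignment is a pure 2-D translation.
import itertools
import math

def match_by_translation(map_pts, image_pts, tolerance=5.0):
    """Return the translation (dx, dy) that aligns the most map
    intersections with image intersections, within `tolerance` units."""
    best_shift, best_score = (0.0, 0.0), 0
    # Hypothesize a shift from every map/image point pairing, then
    # count how many shifted map points land near some image point.
    for (mx, my), (ix, iy) in itertools.product(map_pts, image_pts):
        dx, dy = ix - mx, iy - my
        score = sum(
            1
            for (px, py) in map_pts
            if any(
                math.hypot(px + dx - qx, py + dy - qy) <= tolerance
                for (qx, qy) in image_pts
            )
        )
        if score > best_score:
            best_shift, best_score = (dx, dy), score
    return best_shift, best_score

# Example: image intersections are the map intersections shifted by (10, -4).
map_pts = [(0, 0), (5, 2), (9, 7), (3, 8)]
image_pts = [(x + 10, y - 4) for (x, y) in map_pts]
shift, matched = match_by_translation(map_pts, image_pts)
print(shift, matched)  # -> (10, -4), 4
```

The matched point pairs would then serve as control points for the conflation step, which warps one dataset onto the other; this exhaustive pairing is quadratic in the point counts, so a practical system would prune candidate shifts, as the paper's specialized algorithm does.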