Synthesizing high fidelity 3D landscapes from GIS data

  • Authors: Pedro Maroun Eid; Sudhir Mudur
  • Affiliations: Concordia University, Montreal (both authors)
  • Venue: Proceedings of the 1st International Conference and Exhibition on Computing for Geospatial Research & Application
  • Year: 2010


Abstract

Military, simulation, and gaming applications increasingly use digitally synthesized visuals of real-world landscapes. Such applications require high-fidelity digital 3D representations of landscapes to be generated with low turn-around time once the necessary initial data have been acquired. Geospatial or GIS databases, a primary resource for this initial data, include three main components: elevation data, imagery, and feature data. While the first two are easily available, feature data (also known as vector data) and sometimes the associated 3D models are not. This paper presents the progress achieved in developing a semantics-driven system that addresses the problem of generating high-fidelity 3D landscapes. For instance, given initial geographical source data layers consisting of elevations, road surface features, and imagery, many techniques would simply render road texture over steep terrain. A human, looking at the data layers collectively, would immediately recognize this as improbable and note a missing element: an overpass or tunnel. Our system uses deductive reasoning, through Description Logic reasoners, in conjunction with specialized per-element spatial tests, and applies it to the GIS data to extract, identify, and classify individual spatial elements along with the values of the properties needed for 3D rendering. Semantic Web technology inherently supports this analysis of collective data by separating formal knowledge definition from the actual data and by abstracting the handling of instance data.
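The road-over-steep-terrain example can be illustrated with a minimal sketch of one such per-element spatial test. The abstract does not give the actual test or any thresholds, so the toy DEM grid, the road cell path, the 15-degree slope threshold, and the function names below are all illustrative assumptions, not the paper's method:

```python
import math

def slope_deg(e0, e1, horizontal_dist):
    """Terrain slope in degrees between two sampled elevations."""
    return math.degrees(math.atan2(abs(e1 - e0), horizontal_dist))

def flag_improbable_segments(elevations, road, cell_size, max_slope_deg=15.0):
    """Return indices of road segments steeper than max_slope_deg.

    Cross-checks two data layers (a DEM and a road feature) the way a
    human would: a plain road surface on very steep terrain is
    improbable and suggests a missing overpass or tunnel.

    elevations: 2D list indexed [row][col] (a toy elevation layer)
    road: list of (row, col) grid cells the road passes through
    cell_size: horizontal distance between adjacent cells (metres)
    """
    flagged = []
    for i in range(len(road) - 1):
        (r0, c0), (r1, c1) = road[i], road[i + 1]
        dist = cell_size * math.hypot(r1 - r0, c1 - c0)
        s = slope_deg(elevations[r0][c0], elevations[r1][c1], dist)
        if s > max_slope_deg:
            flagged.append(i)  # candidate overpass/tunnel location
    return flagged

# Toy DEM: a ridge rising sharply across the road's path.
dem = [
    [10, 10, 10, 10],
    [10, 10, 60, 10],
    [10, 10, 10, 10],
]
road = [(1, 0), (1, 1), (1, 2), (1, 3)]  # straight road over the ridge
print(flag_improbable_segments(dem, road, cell_size=30.0))  # → [1, 2]
```

In the system described by the paper, the output of such a geometric test would be fed to a Description Logic reasoner as property values of the road instance, letting the formal knowledge base classify the segment (e.g. as requiring a bridge model) rather than hard-coding the decision in rendering code.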