Spatial embedding and spatial context

  • Authors: Christopher Gold
  • Affiliations: Department of Computing and Mathematics, University of Glamorgan, Wales, UK; Department of Geoinformatics, Universiti Teknologi Malaysia
  • Venue: QuaCon'09 Proceedings of the 1st International Conference on Quality of Context
  • Year: 2009

Abstract

A serious issue in urban 2D remote sensing is that even when linear features can be identified, it is often difficult to combine them into the object of interest: the building. The classic example is trees overhanging walls and roofs, where joining the linear pieces together is often difficult. For robot navigation, surface interpolation, GIS polygon "topology", etc., isolated 0D or 1D elements in 2D space are incomplete: they need to be fully embedded in 2D space in order to have a usable spatial context. We embed all our 0D and 1D entities in 2D space by means of the Voronoi diagram, giving a space-filling environment in which spatial adjacency queries are straightforward. This has been an extremely difficult algorithmic problem, and we show recent results. If we really want to move from exterior form to building functionality, we must work with volumetric entities (rooms) embedded in 3D space. We therefore need an adjacency model for 3D space, allowing queries concerning adjacency, access, etc. to be handled directly from the data structure, exactly as described for 2D space. We will show our recent results for this problem. We claim that an appropriate adjacency model greatly simplifies questions of the spatial context of elements (such as walls) extracted from raw data, allowing direct assembly of compound entities such as buildings. Relationships between compound objects then provide solutions to building adjacency, robot navigation and related problems. If the spatial context can be stated clearly, other contextual issues may be greatly simplified.
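To illustrate the core idea that a Voronoi embedding makes adjacency queries direct, here is a minimal pure-Python sketch. It is not the paper's algorithm or data structure: the paper embeds both 0D and 1D entities exactly, whereas this sketch handles point sites only and approximates the Voronoi diagram by labelling a fine grid with each cell's nearest site, then recording which labels touch. The scene and all names (`voronoi_adjacency`, the site coordinates) are hypothetical.

```python
def voronoi_adjacency(sites, extent=5.0, n=200):
    """Approximate Voronoi adjacency of 2D point sites.

    Labels an n x n grid over [0, extent)^2 with the index of the
    nearest site, then treats two sites as adjacent when their
    labels occur in neighbouring grid cells.
    Returns {site_index: set of adjacent site indices}.
    """
    def nearest(x, y):
        # Index of the site closest to (x, y), by squared distance.
        return min(range(len(sites)),
                   key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2)

    step = extent / n
    labels = [[nearest(ix * step, iy * step) for ix in range(n)]
              for iy in range(n)]

    adj = {i: set() for i in range(len(sites))}
    for iy in range(n):
        for ix in range(n):
            a = labels[iy][ix]
            # Compare with right and upper neighbours; a label change
            # means the two sites' Voronoi cells share a boundary here.
            for dx, dy in ((1, 0), (0, 1)):
                jx, jy = ix + dx, iy + dy
                if jx < n and jy < n:
                    b = labels[jy][jx]
                    if a != b:
                        adj[a].add(b)
                        adj[b].add(a)
    return adj

# Hypothetical scene: three outer sites and one central site (index 3).
# The central site's cell touches all three others, so a single lookup
# in the adjacency structure answers the spatial-context query.
sites = [(1, 1), (4, 1), (2.5, 4), (2.5, 2)]
print(voronoi_adjacency(sites)[3])  # neighbours of the central site
```

A true kinetic or incremental Voronoi structure, as used in the paper, maintains this adjacency exactly and updates it as entities are inserted; the grid version above only conveys why adjacency becomes a constant-time lookup once the embedding exists.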