Airborne light detection and ranging (LiDAR) topographic data provide highly accurate representations of the earth's surface. However, the large data volumes pose computational challenges for dissemination and processing. The main goals of this paper are to evaluate a vertex decimation algorithm that reduces the size of LiDAR data and to test parallel computation frameworks, in particular multicore CPU and GPU, for processing the data. We use a vertex decimation technique to reduce the number of vertices in a triangulated irregular network (TIN) representation of the LiDAR data. To validate and verify the algorithm, we used last-returns-only (LRO) and all-returns (AR) point sets from four tiles of LiDAR data covering flat and undulating terrain. For the flat-terrain data, decimation rates reached roughly 95% for LRO and 55% for AR without significant loss of accuracy in the terrain representation; file sizes were correspondingly reduced by about 96.5% and 60.5%. Processing speed benefited greatly from parallel programming on the multicore CPU framework. GPU processing incurred an additional penalty from noncomputational overhead, yet delivered substantial acceleration in the computational part alone.
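For readers unfamiliar with TIN-based vertex decimation, the Python sketch below illustrates the general idea under assumptions not stated in the abstract: a vertex is dropped when its elevation can be predicted from its Delaunay (TIN) neighbours within a fixed vertical tolerance, and tiles are decimated in parallel across CPU cores. The function names (`decimate_tin`, `decimate_tiles`), the neighbour-mean elevation predictor, and the 0.10 m tolerance are illustrative choices only, not the authors' actual algorithm or parameters.

```python
# Illustrative sketch (not the paper's exact method): decimate a LiDAR point
# set by dropping vertices whose elevation is predictable from their TIN
# neighbours within a vertical tolerance.
import numpy as np
from scipy.spatial import Delaunay
from concurrent.futures import ProcessPoolExecutor


def decimate_tin(points, tol=0.10):
    """points: (N, 3) array of x, y, z; tol: assumed vertical tolerance in metres."""
    tri = Delaunay(points[:, :2])            # 2D Delaunay TIN over x, y
    keep = np.ones(len(points), dtype=bool)

    # Vertex -> neighbour-vertex adjacency from the triangulation.
    indptr, indices = tri.vertex_neighbor_vertices
    for v in range(len(points)):
        nbrs = indices[indptr[v]:indptr[v + 1]]
        if len(nbrs) == 0:
            continue
        # Predict the vertex elevation as the mean of its neighbours' z
        # (single pass; a production algorithm would re-triangulate/iterate).
        z_pred = points[nbrs, 2].mean()
        if abs(z_pred - points[v, 2]) < tol:
            keep[v] = False                  # redundant vertex: drop it

    return points[keep]


def decimate_tiles(tiles, tol=0.10, workers=4):
    """Hypothetical tile-parallel driver: one worker process per LiDAR tile,
    mirroring the multicore-CPU setup evaluated in the paper."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decimate_tin, tiles, [tol] * len(tiles)))
```

A tile-level split like `decimate_tiles` keeps workers independent (no shared triangulation), which is why multicore CPU parallelism scales well here; on a GPU, the per-tile host-to-device transfer is the kind of noncomputational overhead the abstract refers to.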