Compression and streaming of polygon meshes

  • Authors: Jack Snoeyink; Martin Isenburg
  • Affiliations: The University of North Carolina at Chapel Hill; The University of North Carolina at Chapel Hill
  • Venue: Compression and streaming of polygon meshes
  • Year: 2004

Abstract

Polygon meshes provide a simple way to represent three-dimensional surfaces and are the de-facto standard for interactive visualization of geometric models. Storing large polygon meshes in standard indexed formats results in files of substantial size. Such formats allow listing vertices and polygons in any order, so that the file stores not only the mesh but also the particular ordering of its elements. Mesh compression rearranges vertices and polygons into an order that allows more compact coding of the incidence between vertices and predictive compression of their positions. Previous schemes were designed for triangle meshes, so polygonal faces were triangulated prior to compression. I show that polygon models can be encoded more compactly by avoiding the initial triangulation step. I describe two compression schemes that achieve better compression by encoding meshes directly in their polygonal representation, and I demonstrate that the same holds true for volume meshes by extending one scheme to hexahedral meshes.

Nowadays scientists create polygon meshes of enormous size. Ironically, compression schemes are not capable, at least not on common desktop PCs, of dealing with the gigabyte-sized meshes that need compression the most. I describe how to compress such meshes on a standard PC using an out-of-core approach. The compressed mesh allows streaming decompression with minimal memory requirements while providing seamless connectivity along the advancing decompression boundaries. I show that this type of mesh access allows the design of IO-efficient out-of-core mesh simplification algorithms. In contrast, the mesh access provided by today's indexed formats complicates subsequent processing because of the IO-inefficiency of de-referencing (resolving all polygon-to-vertex references). These mesh formats were designed years ago and do not take into account that a mesh may not fit into main memory.

When operating on large data sets that mostly reside on disk, data access must be consistent with the data layout. I extract the essence of the compressed format to design a general streaming format that provides concurrent access to coherently ordered elements while documenting their coherence. This eliminates the problem of IO-inefficient de-referencing. Furthermore, it makes it possible to redesign mesh processing tasks as streaming, possibly pipelined, modules that operate on large meshes, such as on-the-fly compression of simplified mesh output.
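
To make the indexed-format problem concrete, here is a minimal C++ sketch of the de-referencing step the abstract refers to: each face stores indices into a vertex array, so reproducing a face's geometry requires random lookups into that array. The Vertex and Quad types and the tiny example mesh are hypothetical illustrations, not the dissertation's data structures; the point is only that, once the vertex array no longer fits in main memory, these lookups turn into scattered disk IO.

```cpp
// Sketch (hypothetical types): resolving polygon-to-vertex references
// in an indexed mesh. Each face stores indices into the vertex array,
// so decoding a face needs random access to vertices that may appear
// anywhere in the file -- the IO-inefficient "de-referencing" step
// when the vertex array does not fit in main memory.
#include <array>
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };
struct Quad   { std::array<int, 4> v; };  // indices into the vertex array

int main() {
  // A single quad in an indexed representation (0-based indices).
  std::vector<Vertex> vertices = {
      {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}};
  std::vector<Quad> faces = { { {0, 1, 2, 3} } };

  // De-referencing: each face index is resolved to a position. With a
  // gigabyte-sized vertex array on disk, these lookups become random IO.
  for (const Quad& f : faces) {
    for (int i : f.v) {
      const Vertex& p = vertices[i];
      std::printf("v %g %g %g\n", p.x, p.y, p.z);
    }
  }
  return 0;
}
```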
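The abstract mentions predictive compression of vertex positions without naming a predictor. A common choice in this line of work is the parallelogram rule of Touma and Gotsman, sketched below under the assumption that three vertices of an adjacent polygon are already decoded; the predictors actually used in the dissertation may differ.

```cpp
// Hedged sketch of parallelogram prediction (Touma-Gotsman style),
// a common predictor in mesh compression. The new vertex D across the
// edge (B, C) opposite a known vertex A is predicted as B + C - A, and
// only the small correction D - prediction needs to be entropy coded.
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 parallelogram_predict(const Vec3& a, const Vec3& b, const Vec3& c) {
  return {b.x + c.x - a.x, b.y + c.y - a.y, b.z + c.z - a.z};
}

int main() {
  Vec3 a{0, 0, 0}, b{1, 0, 0}, c{0, 1, 0};  // already-decoded vertices
  Vec3 d{1.1f, 0.9f, 0.05f};                // actual new vertex
  Vec3 p = parallelogram_predict(a, b, c);
  // The encoder stores only this residual; small residuals compress well.
  std::printf("residual: %g %g %g\n", d.x - p.x, d.y - p.y, d.z - p.z);
  return 0;
}
```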
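For the streaming format, one plausible reading of "coherently ordered elements while documenting their coherence" is that vertices and faces arrive interleaved in a coherent order and that the stream records when a vertex is finalized, i.e. will never be referenced again, so a reader only buffers the active boundary. The event-record layout below is purely hypothetical and only illustrates that idea, not the actual file format.

```cpp
// Hedged sketch of a streaming reader: vertices and faces arrive
// interleaved, and an explicit "finalize" record documents that a
// vertex will not be referenced again, so only the unfinalized
// boundary of the mesh is held in memory at any time.
#include <cstdio>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

struct Event {
  enum Kind { VERTEX, FACE, FINALIZE };
  Kind kind;
  Vec3 pos;               // for VERTEX
  std::vector<int> refs;  // for FACE: references to earlier vertices
  int id;                 // vertex id for VERTEX / FINALIZE
};

int main() {
  // A hypothetical, coherently ordered stream of records.
  std::vector<Event> stream = {
      {Event::VERTEX, {0, 0, 0}, {}, 0},
      {Event::VERTEX, {1, 0, 0}, {}, 1},
      {Event::VERTEX, {0, 1, 0}, {}, 2},
      {Event::FACE, {}, {0, 1, 2}, 0},
      {Event::FINALIZE, {}, {}, 0},  // vertex 0 is never referenced again
  };

  std::unordered_map<int, Vec3> active;  // only unfinalized vertices
  for (const Event& e : stream) {
    if (e.kind == Event::VERTEX) {
      active[e.id] = e.pos;
    } else if (e.kind == Event::FACE) {
      for (int r : e.refs) {
        const Vec3& p = active.at(r);
        std::printf("%g %g %g  ", p.x, p.y, p.z);
      }
      std::printf("\n");
    } else {
      active.erase(e.id);  // free memory as soon as coherence allows
    }
  }
  std::printf("vertices still buffered: %zu\n", active.size());
  return 0;
}
```

Because a reader like this never needs the whole vertex array at once, processing modules of this kind could in principle be chained into the streaming, possibly pipelined, workflows the abstract describes.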