
A Survey of Mesh Compression Techniques


Eric Lorimer
University of Illinois, Urbana-Champaign

ABSTRACT
As the field of computer graphics advances, computers and graphics hardware become more powerful. However, the size and complexity of the meshes we wish to represent are also increasing, to the point where we must find some way to compress this information in order to make storage and transmission feasible (or more feasible). This paper will survey the research in mesh compression, provide some background, and look at future directions in the field.

1. INTRODUCTION


In many areas of computer graphics, there arises a need to work with very large meshes. The Digital Michelangelo project, for example, has scanned the David statue at a resolution of 0.29 mm, which requires 32 GB of data to store. Aside from the problems of manipulating such a large data set on machines with a limited amount of main memory (usually much less than 32 GB), techniques for compressing meshes such as these are becoming increasingly important in order to make storage and transmission more feasible. In addition, there remains the elusive possibility of streaming 3D content over the Internet in real time. Many games (e.g. role-playing games) experience bandwidth bottlenecks due in large part to the transmission of game data, including mesh data.

Thus, we see the need for compression at two ends of the spectrum. One side seeks to compress relatively small meshes further to reduce transmission time over slow Internet links. The other side seeks to compress very large meshes in order to make storage and transmission more feasible.

A mesh can be defined simply as a set of vertices, edges, and faces together with their incidence relationships. We call the incidence relationships (i.e. which faces are incident on a vertex, which edges are incident on a face, etc.) the mesh connectivity, and the vertex positions the mesh geometry. Most mesh compression techniques treat the mesh geometry and the mesh connectivity separately.

2. COMPRESSING MESH CONNECTIVITY

One of the earliest attempts to encode mesh connectivity was made by Deering [1]. He builds on the idea of a generalized triangle strip, in which the direction to extend the chain when appending a new vertex is encoded as well. Deering introduces a fixed-length queue of 16 entries called the mesh buffer [1]. Vertices must be explicitly pushed onto and later referenced from this queue, which avoids the problem of generalized triangle strips requiring frequent restarts or duplication of already decoded vertices. Deering defines four operations for his mesh buffer: replace oldest, replace middle, restart clockwise, and restart counterclockwise. In addition, Deering preserves the advantage of triangle strips in that they can be decoded by a single linear scan of the data. His method, however, cuts the mesh into strips and has no way to stitch it back together.
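
To make the mesh-buffer idea concrete, here is a rough Python sketch under an assumed, simplified command format (this is not Deering's actual bitstream or API): a fixed-length queue of recently decoded vertices lets the strip reuse a vertex by a short index instead of re-transmitting it or restarting the strip.

```python
from collections import deque

MESH_BUFFER_SIZE = 16  # Deering's mesh buffer holds 16 entries

def decode_strip(commands):
    """Decode a simplified generalized-strip command stream.

    commands: list of ('new', vertex, push_flag) or ('reuse', buffer_index).
    Vertices are only placed in the buffer when explicitly pushed.
    """
    buffer = deque(maxlen=MESH_BUFFER_SIZE)   # oldest entries fall out automatically
    strip = []
    for cmd in commands:
        if cmd[0] == 'new':
            _, vertex, push = cmd
            if push:
                buffer.append(vertex)          # explicit push, as in Deering's scheme
            strip.append(vertex)
        else:                                  # ('reuse', i): short reference, no re-send
            strip.append(buffer[cmd[1]])
    return strip

cmds = [('new', (0, 0, 0), True), ('new', (1, 0, 0), True),
        ('new', (0, 1, 0), False), ('reuse', 0), ('new', (1, 1, 0), False)]
print(decode_strip(cmds))
```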

EdgeBreaker [10] is a more sophisticated connectivity compression algorithm. It is a face-based scheme which traverses the faces (triangles) of the mesh, generating a spanning tree, and encodes one of five symbols at each face to record the traversal history so that the process can be reversed during decoding. The five symbols form the CLERS string and are defined as follows. A C is encoded when the tip vertex of the current triangle has not yet been visited. Otherwise, an L is encoded when the left neighboring triangle has already been visited, an R when the right triangle has been visited, an E when both the left and right triangles have been visited, and an S when neither the left nor the right triangle has been visited (see Figure 1). In the S case, EdgeBreaker recurses on the right subtree and then the left. EdgeBreaker can compress the connectivity of a mesh to near-optimal rates, normally around 2 bits/vertex.

Figure 1: EdgeBreaker CLERS symbols
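
To make the symbol assignment concrete, the following minimal sketch (not the original EdgeBreaker code) classifies a single triangle from the three flags described above; the surrounding traversal and the entropy coding of the resulting CLERS string are omitted.

```python
def clers_symbol(tip_visited: bool, left_visited: bool, right_visited: bool) -> str:
    """Classify the current triangle during an EdgeBreaker-style traversal.

    tip_visited:   has the vertex opposite the gate edge been visited already?
    left_visited:  has the triangle across the left edge been visited?
    right_visited: has the triangle across the right edge been visited?
    """
    if not tip_visited:
        return "C"          # new vertex: its position is predicted and encoded here
    if left_visited and right_visited:
        return "E"          # both neighbors done: this branch of the traversal ends
    if left_visited:
        return "L"
    if right_visited:
        return "R"
    return "S"              # split: recurse on the right subtree, then the left

# Sanity check of the five cases:
assert clers_symbol(False, False, False) == "C"
assert clers_symbol(True, True, False) == "L"
assert clers_symbol(True, False, True) == "R"
assert clers_symbol(True, True, True) == "E"
assert clers_symbol(True, False, False) == "S"
```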

A more elegant formulation of the algorithm is described by Rossignac et al. [7]. This implementation is built on a corner table data structure, which uses two tables to store, for each corner of each triangle, its vertex and its opposite corner. Functions such as "next corner around triangle" and "previous corner around triangle" can then be expressed easily, and the entire algorithm can be written succinctly in about a page of pseudocode.
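
The sketch below shows the corner-table convention in the form it is commonly presented (the exact layout here is an assumption, not code from [7]): corner c belongs to triangle c // 3, V[c] gives its vertex, and O[c] gives the corner opposite it across the shared edge.

```python
# Two triangles (0, 1, 2) and (2, 1, 3) sharing the edge (1, 2).
V = [0, 1, 2,  2, 1, 3]      # vertex index at each corner
O = [5, -1, -1,  -1, -1, 0]  # opposite corner across the shared edge (-1 on a boundary)

def next_corner(c: int) -> int:
    """Next corner around the same triangle (counterclockwise)."""
    return c - 2 if c % 3 == 2 else c + 1

def prev_corner(c: int) -> int:
    """Previous corner around the same triangle."""
    return next_corner(next_corner(c))

def triangle(c: int) -> int:
    return c // 3

# The gate edge of corner 0 is spanned by the other two corners of its triangle,
# and O[0] jumps across that edge into the neighboring triangle.
print(V[next_corner(0)], V[prev_corner(0)])  # -> 1 2
print(triangle(O[0]))                        # -> 1
```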

Although EdgeBreaker is a simple, efficient method for compressing the connectivity of a mesh, it has some limitations. First, EdgeBreaker as originally expressed is limited to triangle meshes. In practice, since many meshes are triangulated, this may not seem like a large problem, but it nevertheless limits the areas in which the algorithm can be applied. Second, EdgeBreaker requires random access to the vertices, which is inconvenient for out-of-core processing of gigantic meshes.

Isenburg and Snoeyink [6] address the restriction to triangle meshes and extend EdgeBreaker to handle arbitrary polygon meshes. Gumhold and Strasser [3] use an idea similar to EdgeBreaker in their Cut-Border Machine, but have the advantage of single-pass encoding and decoding, which makes it more useful for out-of-core processing applications.

3. COMPRESSING MESH GEOMETRY

3.1 Lossless Methods

Mesh geometry refers to the set of vertex positions of the mesh. The earliest and still most popular approach is a two-stage process of quantization followed by predictive encoding. Quantization reduces the range of the data: it takes the three components (x, y, z) of each vertex and stores them in a fixed number of bits (typically 10-14 is sufficient). A mesh quantized to 10-14 bits is visually indistinguishable from the original (usually 32 bits per component), so this quantization can be considered "lossless."
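
A rough sketch of the quantization step (uniform quantization over the model's bounding box; the 12-bit setting is just one point in the 10-14 bit range mentioned above):

```python
import numpy as np

def quantize(vertices: np.ndarray, bits: int = 12):
    """Uniformly quantize float vertex positions to integers in [0, 2**bits - 1].

    vertices: (n, 3) array of x, y, z coordinates.
    Returns integer grid coordinates plus the offset/scale needed to dequantize.
    """
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    scale = (2**bits - 1) / np.maximum(vmax - vmin, 1e-12)  # guard against flat axes
    q = np.round((vertices - vmin) * scale).astype(np.int64)
    return q, vmin, scale

def dequantize(q, vmin, scale):
    return q / scale + vmin

verts = np.random.rand(1000, 3).astype(np.float32)
q, vmin, scale = quantize(verts, bits=12)
print(np.abs(dequantize(q, vmin, scale) - verts).max())  # error bounded by half a grid cell
```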

Further quantization, however, introduces very noticeable high-frequency noise [12]. Sorkine et al. [12] show how this high-frequency distortion can be converted into low-frequency, large-scale geometric distortion, which is less noticeable to the human visual system (though still quite significant): the Laplacian is applied to the vertices to produce "δ coordinates," these are quantized, and the process is reversed on the decoding side. This produces much more aesthetically pleasing results, but it is debatable whether distorting the low-frequency components is more or less acceptable than high-frequency artifacts.

The second stage of these lossless compression algorithms is some form of spatial prediction. Because the decoding process usually orders the vertices roughly by position, and because vertex positions within a local region are highly correlated, spatial prediction attempts to "guess" the location of the next vertex given the already decoded vertices. Linear predictive coding is a simple scheme which uses a linear combination of a small number of previous vertices to predict the next vertex. The simplest linear rule is to predict the vertex to be the same as the immediately preceding vertex, which yields a simple delta-encoding.

The most widely used prediction rule, the "parallelogram predictor," is based on the observation that adjacent triangles tend to form parallelograms; it therefore predicts the next vertex to complete a parallelogram with the previous three [13]. This works well, but it fails to account for the curvature of the mesh and cannot predict the crease angle between adjacent triangles (see Figure 2).

Figure 2: Touma-Gotsman parallelogram predictor

The predicted vertex is then compared to the actual vertex position, and the difference is encoded into the output stream. When the predicted and actual positions are close, the difference can be stored in fewer bits than would be required for the actual (quantized) position. There are various ways to encode this difference.
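
A minimal sketch of the parallelogram prediction and residual step on quantized integer coordinates (illustrative only; a real coder follows the connectivity traversal and entropy-codes the residuals):

```python
import numpy as np

def parallelogram_predict(a, b, c):
    """Predict the vertex completing a parallelogram with triangle (a, b, c),
    where (b, c) is the edge shared with the triangle being extended."""
    return b + c - a

# Quantized positions of an already-decoded triangle and the true next vertex.
a = np.array([0, 0, 0])
b = np.array([10, 0, 0])
c = np.array([0, 10, 0])
actual = np.array([11, 9, 1])

predicted = parallelogram_predict(a, b, c)
residual = actual - predicted        # small correction vector, cheap to encode
print(residual)                      # -> [ 1 -1  1]

decoded = predicted + residual       # the decoder reverses the step exactly
assert np.array_equal(decoded, actual)
```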

3.2 Lossy Methods

Because the spatial prediction rule is fixed by the algorithm, the only way to gain further compression within the lossless framework is in the quantization stage. However, as noted above, aggressive quantization leads to distinct artifacts and can no longer be considered lossless.

Karni and Gotsman [8] propose using spectral analysis to exploit the information present in the connectivity to aid geometry compression. Building on the signal processing framework introduced by Taubin [11] for surface fairing, they decompose the Laplacian matrix into its orthogonal eigenvectors and project the geometry signal (each x, y, z component separately) onto this basis.

The Laplacian matrix is a sparse n × n matrix (n is the number of vertices) defined as follows:

    L_ij = -1      if i = j,
           1/d_i   if i and j are neighbors,
           0       otherwise,

where d_i is the degree (or valence) of vertex i.

They demonstrate that by discarding the "high-frequency" components (eigenvectors with large corresponding eigenvalues) they can compress the mesh with less distortion than the lossless algorithm based on the parallelogram predictor.

However, computing the eigenvectors of the sparse Laplacian is prohibitively expensive for anything larger than trivial meshes (approximately 600 vertices). In addition, numerical stability becomes an issue when computing eigenvectors of very large matrices. Their algorithm relies on mesh partitioning to make the problem feasible [8], but this is far from ideal.
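
The following toy numpy sketch illustrates the projection-and-truncation idea (it is not the implementation from [8]; for simplicity it uses the symmetric combinatorial Laplacian D - A instead of the degree-normalized form defined above, so that the eigenvectors form an orthogonal basis):

```python
import numpy as np

def combinatorial_laplacian(n, edges):
    """Symmetric Laplacian D - A of an undirected graph on n vertices."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

# Toy "mesh": a closed loop of 5 vertices with slightly noisy positions.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
t = np.linspace(0, 2 * np.pi, 5, endpoint=False)
verts = np.column_stack([np.cos(t), np.sin(t), 0.01 * np.random.randn(5)])

L = combinatorial_laplacian(5, edges)
eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues = low to high "frequency"
k = 3                                  # keep only the k lowest-frequency basis vectors
basis = eigvecs[:, :k]

coeffs = basis.T @ verts               # project each x, y, z column onto the truncated basis
approx = basis @ coeffs                # lossy reconstruction from k coefficients per axis
print(np.abs(approx - verts).max())    # distortion introduced by discarding high frequencies
```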

More recently, Karni and Gotsman [9] suggest using fixed spectral basis vectors computed from a 6-regular triangle mesh. This is convenient because it allows the encoder and decoder to use the FFT to compute the basis vectors. The problem then reduces to mapping vertices of the "candidate" mesh (the mesh to be compressed) onto vertices of the "host" mesh. Where the number of vertices, and in particular the number of boundary vertices, differs, they augment the candidate mesh by placing new vertices so as to normalize the vertex degrees; this augmentation is just a special case of remeshing. They show very satisfactory results: a small loss in quality in exchange for the ability to encode and decode the mesh very efficiently.

In principle, spectral compression methods can be used for progressive transmission, but in practice little work has been done in this area, likely because encoding and decoding time tend to dominate and transmission time is not usually the bottleneck.

Spectral methods work best on smooth meshes, where most of the energy is concentrated in the low frequencies. Meshes generated from CAD models, for instance, with sharp edges that must be preserved, are not well suited to spectral compression.

4. MULTIRESOLUTION AND PROGRESSIVE COMPRESSION METHODS

4.1 Progressive Meshes

A significantly different approach from the standard face-based connectivity coding and geometry compression recognizes that simplification can itself be considered a form of compression, albeit a very lossy one.

Progressive simplification compression methods have three components: first, the choice of the simplification operator; second, the choice of a metric to determine which mesh element to remove next; and third, an efficient coding of the information needed to reconstruct the mesh.

By defining an invertible simplification operator and recording the steps taken during simplification, simplification can be used for lossless as well as lossy compression. Edge collapse schemes such as Hoppe's Progressive Meshes [4] are well suited for this. The inverse of an edge collapse is a vertex split, which inserts a new vertex next to an existing vertex and morphs it into place, creating a new edge. Hoppe chooses the edge to collapse by solving a complex energy minimization problem, which yields high quality results, but simpler edge collapse schemes exist (e.g. the quadric error metric [2]) which might be more suitable for real-time progressive compression.

Hoppe [5] shows that Progressive Meshes can be used to compress meshes to approximately 6 bits/vertex.
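
A small sketch of the record-and-replay idea (the data layout here is hypothetical, not Hoppe's Progressive Mesh file format): each recorded step stores just enough to undo an edge collapse with a vertex split.

```python
from dataclasses import dataclass, field

@dataclass
class VertexSplit:
    split_vertex: int        # surviving vertex that will be split during refinement
    new_position: tuple      # position of the vertex reintroduced by the split
    moved_neighbors: list = field(default_factory=list)  # neighbors reassigned to the new vertex

def refine(base_vertices, splits):
    """Replay vertex splits (in reverse order of the collapses) on a coarse base mesh.
    Only vertex positions are tracked here; a real decoder also rebuilds connectivity."""
    vertices = list(base_vertices)
    for s in splits:
        vertices.append(s.new_position)
    return vertices

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
splits = [VertexSplit(0, (0.5, 0.5, 0.0), [1])]
print(refine(base, splits))  # base mesh plus one level of restored detail
```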

4.2 Wavelets

Wavelet compression techniques have proven very successful in 2D image compression; the recent JPEG2000 standard and the MPEG-4 still image coder both use wavelets. It is therefore not surprising that recent research in mesh compression has also tried to exploit wavelets. The fundamental difference between images and meshes, however, is that images are sampled on a regular 2D grid, whereas meshes in general have very irregular connectivity and sampling. Thus, much of the focus in wavelet mesh compression is on remeshing techniques, typically based on subdivision schemes, which produce a semi-regular mesh to which the wavelet transform can be applied.

During the remeshing, a sequence of approximations at different resolutions is generated. The wavelet transform converts this sequence into a base mesh and a sequence of coefficients which can be efficiently encoded.
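
As a loose one-dimensional analogue of this idea (actual schemes operate on semi-regular meshes, but the principle is the same: detail coefficients measure how far the finer level deviates from a prediction made from the coarser level, and they are small wherever the surface is smooth):

```python
import numpy as np

def analyze(fine):
    """One level of a simple midpoint-prediction wavelet analysis on a 1D signal of
    odd length: keep the even samples as the coarse level, and store each odd sample
    only as its deviation from the average of its two coarse neighbors."""
    coarse = fine[::2]
    predicted = 0.5 * (coarse[:-1] + coarse[1:])
    details = fine[1::2] - predicted          # near zero where the signal is smooth
    return coarse, details

def synthesize(coarse, details):
    fine = np.empty(len(coarse) + len(details))
    fine[::2] = coarse
    fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:]) + details
    return fine

signal = np.sin(np.linspace(0, np.pi, 9))     # a smooth "geometry" signal
coarse, details = analyze(signal)
print(np.abs(details).max())                  # small coefficients: cheap to encode or discard
print(np.allclose(synthesize(coarse, details), signal))  # exact if all details are kept
```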

An outstanding problem in all lossy geometry compression schemes is the choice of an appropriate visual metric, both to guide the simplification and to measure rate-distortion when evaluating and comparing methods. Karni and Gotsman [8] propose a visual metric which averages the geometric distance between the models and the distance between their Laplacians (a proxy for normals). Sorkine et al., seeking to show that normal distortion matters more than geometric distortion, naturally argue that the weighting should strongly favor preserving normals. In any case, finding a suitable visual metric for lossy compression methods is still very much an open problem.
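
A sketch of a metric in this spirit (equal weighting of positional and Laplacian differences is assumed here; the exact formula and weights used in [8] may differ):

```python
import numpy as np

def visual_error(v1, v2, L):
    """Average of per-vertex geometric distance and per-vertex difference of the
    Laplacian coordinates (a rough stand-in for normal/smoothness distortion).

    v1, v2: (n, 3) arrays of corresponding vertex positions (original vs. compressed).
    L: an n x n Laplacian matrix built from the shared connectivity.
    """
    geometric = np.linalg.norm(v1 - v2, axis=1).mean()
    laplacian = np.linalg.norm(L @ v1 - L @ v2, axis=1).mean()
    return 0.5 * (geometric + laplacian)

n = 4
L = np.eye(n) - np.full((n, n), 1.0 / n)   # toy Laplacian-like matrix, just for the demo
v1 = np.random.rand(n, 3)
v2 = v1 + 0.01 * np.random.randn(n, 3)
print(visual_error(v1, v2, L))
```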

5. CONCLUSION

Mesh compression has come a long way, driven by the desire to represent more and more detailed objects (like the David statue). This trend is likely to continue, and as long as the complexity of models grows faster than storage and transmission capacity, mesh compression will remain a topic of research. It is also clear that lossless mesh connectivity compression has nearly reached the optimal limit, and little more can be done to improve it. Compressing mesh geometry, however, is still a difficult problem with no clear-cut solution. Novel methods like spectral analysis and wavelet transforms could provide new insights and directions for future research. When the restriction of losslessness is lifted, even more techniques become possible. An important issue to be resolved before lossy compression methods can be fully evaluated is the definition of a suitable visual error metric.

6. REFERENCES

[1] M. Deering. Geometry compression. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pages 13–20. ACM Press, 1995.
[2] M. Garland and P. S. Heckbert. Surface simplification using quadric error metrics. Computer Graphics, 31(Annual Conference Series):209–216, 1997.
[3] S. Gumhold and W. Strasser. Real time compression of triangle mesh connectivity. Computer Graphics, 32(Annual Conference Series):133–140, 1998.
[4] H. Hoppe. Progressive meshes. Computer Graphics, 30(Annual Conference Series):99–108, 1996.
[5] H. Hoppe. Efficient implementation of progressive meshes. Computers and Graphics, 22(1):27–36, 1998.
[6] M. Isenburg and J. Snoeyink. Face Fixer: Compressing polygon meshes with properties. In K. Akeley, editor, SIGGRAPH 2000, Computer Graphics Proceedings, pages 263–270. ACM Press / ACM SIGGRAPH / Addison Wesley Longman, 2000.
[7] J. Rossignac, A. Safonova, and A. Szymczak. 3D compression made simple: EdgeBreaker on a corner table. In Proceedings of the Shape Modeling International Conference, Genoa, Italy, 2001.
[8] Z. Karni and C. Gotsman. Spectral compression of mesh geometry. In K. Akeley, editor, SIGGRAPH 2000, Computer Graphics Proceedings, pages 279–286. ACM Press / ACM SIGGRAPH / Addison Wesley Longman, 2000.
[9] Z. Karni and C. Gotsman. 3D mesh compression using fixed spectral bases. In Proceedings of Graphics Interface 2001.
[10] J. Rossignac. EdgeBreaker: Connectivity compression for triangle meshes. IEEE Transactions on Visualization and Computer Graphics, 5(1):47–61, 1999.
[11] G. Taubin. A signal processing approach to fair surface design. Computer Graphics, 29(Annual Conference Series):351–358, 1995.
[12] O. Sorkine, D. Cohen-Or, and S. Toledo. High-pass quantization for mesh encoding. In Proceedings of the Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, 2003.
[13] C. Touma and C. Gotsman. Triangle mesh compression. In Graphics Interface '98 Conference Proceedings, pages 26–34, 1998.
