Depth-Peeling For Texture-Based Volume Rendering
1. Introduction
This work deals with the question of how to extract and visualize the n-th closest iso-surface to the viewer, based on the same iso-value, for a given volumetric dataset. To resolve the problem, we use the idea of depth-peeling and adapt it to volumetric representations. By exploiting the anatomy of iso-surfaces in volumes, the proposed scheme clearly beats methods like depth-sorting, screen-door transparency and the original depth-peeling approach, since the result is rendered in a single pass without further context information. Transfer functions cannot accomplish this, since they influence either all or none of the iso-surfaces belonging to the same iso-value.
2. Related Work
Since the pioneering work of [12], transparency has been used in volume visualization to better convey shape [8, 9, 10], implicitly in point-stippling techniques [14, 15, 16], as well as in NPR for illustrating superimposed, differently shaded iso-surfaces [17]. The bubble model [2] uses a previewing scheme similar to ours, but we are able to distinguish structures for a fixed iso-value. Two-Level Volume Rendering (2lVR), as described in [6], is, similar to our method, designed to visualize volumetric data in a Focus+Context (F+C) manner; this method, however, requires pre-classification by segmentation or transfer functions and is not hardware-accelerated in that implementation.
Depth-peeling was realized in [18] using Virtual Pixel Maps and Dual Depth Buffers [3]. Later on, special hardware functionality accelerated the use of depth-peeling [5]; its use in NPR and volume clipping was soon recognized as well [4, 21]. The common idea of these methods is depth testing with two depth buffers and two associated depth tests. The first depth test can be imagined (depending on the implementation) as the standard GL_LEQUAL testing. The second depth test, in turn, further rejects the active fragment if it is nearer to the viewer than the depth of the pixel rendered at the same screen position in the previous pass.
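As an illustration, the following C sketch mimics the combined effect of the two depth tests on the CPU; the function and buffer names and the example depths are our own and do not stem from the cited implementations.

#include <stdio.h>

/* Minimal CPU sketch of the dual depth test of classic depth-peeling.
   Real implementations keep the second buffer in a depth texture and
   perform the comparison on the GPU. */
int peel_test(float frag_depth,   /* depth of the incoming fragment       */
              float zbuf_depth,   /* first buffer: standard LEQUAL test   */
              float peeled_depth) /* depth extracted in the previous pass */
{
    if (frag_depth > zbuf_depth)
        return 0; /* fails the ordinary depth test */
    if (frag_depth <= peeled_depth)
        return 0; /* at or in front of the previously peeled layer */
    return 1;     /* fragment belongs to the next-nearest layer */
}

int main(void)
{
    /* z-buffer cleared to 1.0; the previous pass peeled depth 0.3 */
    printf("%d\n", peel_test(0.4f, 1.0f, 0.3f)); /* 1: second layer      */
    printf("%d\n", peel_test(0.3f, 1.0f, 0.3f)); /* 0: first layer again */
    return 0;
}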
Our way of extracting the different layers differs slightly.
3. Volumetric Depth-Peeling
Texture units:

  TU 0 (3D volume):            RGBA, 24 bit $g_f$, 8 bit $f$
  TU 1 (framebuffer copy, 2D): RGBA, 24 bit $c_{old}$, 8 bit $layer\#$
  TU 2 (layer colors):         RGBA, 24 bit $c_{layer}$, 8 bit $\alpha_{layer}$
  TU 3 (view directions):      RGB, $v_f$

1. Determine sign-test:  $sign \leftarrow (-1)^{layer\#+1}$

2. Sign-test:  $sign \cdot f < sign \cdot iso$?  Yes: kill fragment.  No: $layer \leftarrow layer\# + 1$.

3. Lighting:  $c_{new} \leftarrow c_{spec} \left\langle \frac{l+v_f}{\|l+v_f\|},\, g_f \right\rangle^{ex} + c_{diff} \langle g_f, l \rangle + c_{amb}$

4. Coloring:  $c_{new} \leftarrow c_{new} \odot c_{layer}$

5. Blending:  $c_{new} \leftarrow \begin{cases} c_{new} & : layer < 2 \\ (1-\alpha_{layer})\, c_{old} + \alpha_{layer}\, c_{new} & : layer \geq 2 \end{cases}$

6. Layer test:  $c_{result} \leftarrow \begin{cases} (c_{back},\, layer) & : layer < n \\ (c_{old},\, layer) & : layer > n \\ (c_{new},\, layer) & : layer = n \end{cases}$

Figure 1. Survey of the fragment program for ordinary depth-peeling. $\odot$ denotes component-wise multiplication, $iso$ the iso-value, $c_{spec}$, $c_{diff}$ and $c_{amb}$ specular, diffuse and ambient weights, $ex$ the specular exponent, $c_{back}$, $c_{old}$ and $c_{new}$ the background color, the color in the framebuffer and the resulting color, respectively.
While slicing through the volume, the algorithm alternately renders into the framebuffer and mirrors the content of the framebuffer in TU 1. The latter step serves accessibility, since fragment programs are currently not able to read the framebuffer directly. Initially, TU 1 is cleared with the background color $c_{back}$, annotating in the $\alpha$-channel that no layer has been extracted so far. The interior loop (steps 1-6) describes the action taking place on a per-fragment basis.
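In plain OpenGL, this ping-pong between rendering and copying could be sketched as follows; draw_slice and fb_copy_tex are hypothetical names, and the fragment-program setup is elided.

#include <GL/gl.h>

/* Slice loop with the framebuffer mirrored into TU 1 after each slice.
   glActiveTexture requires OpenGL 1.3 or the ARB_multitexture extension. */
void render_layers(int num_slices, GLuint fb_copy_tex, int w, int h)
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f); /* c_back, layer# = 0 in alpha */
    glClear(GL_COLOR_BUFFER_BIT);
    for (int i = 0; i < num_slices; ++i) {
        /* draw_slice(i);  proxy geometry with the fragment program bound */

        /* copy the current framebuffer content so that the fragments of
           the next slice can read it through texture unit 1 */
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, fb_copy_tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
    }
}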
In step 1, depending on whether the last layer was odd or even, we classify whether an interior or an exterior boundary currently has to be extracted. Step 2 immediately rejects fragments from further processing if they do not lie on an iso-surface boundary. Intuitively, the surviving fragments are located just behind the boundary of the incremented layer.
Step 3 prophylactically calculates the canonical Phong lighting for a local viewer. The view direction $v_f$ is obtained on a per-fragment basis by fetching the texture coordinates from TU 3. The texture coordinates of TU 3 code the view direction on a per-vertex basis; therefore, during rasterization, the interpolated texture coordinates yield the view direction for each fragment.
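Per color channel, the lighting of step 3 corresponds to the following computation; the clamping of the dot products to zero is our addition, and the vector helpers are minimal stand-ins.

#include <math.h>

typedef struct { float x, y, z; } vec3;

float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

vec3 norm3(vec3 a)
{
    float s = 1.0f / sqrtf(dot3(a, a));
    vec3 r = { a.x * s, a.y * s, a.z * s };
    return r;
}

/* One channel of step 3:
   c_new = c_spec * <h, g_f>^ex + c_diff * <g_f, l> + c_amb,
   with the halfway vector h = (l + v_f) / |l + v_f|. */
float phong_channel(vec3 g_f, vec3 l, vec3 v_f,
                    float c_spec, float c_diff, float c_amb, float ex)
{
    vec3 h = norm3((vec3){ l.x + v_f.x, l.y + v_f.y, l.z + v_f.z });
    return c_spec * powf(fmaxf(dot3(h, g_f), 0.0f), ex)
         + c_diff * fmaxf(dot3(g_f, l), 0.0f)
         + c_amb;
}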
4. Applications in NPR
In this section we briefly explain how the technique above can be used to distinguish between visible and hidden silhouettes of a volumetric object, as known from polygonal models, e.g. in [11]. For the sake of simplicity, we depict hidden silhouettes of increasing layer number with brighter grey colors, which conforms to the human perception of depth through atmospheric effects such as fog [20]. Stylization features, like dashed lines, are not the focus of this work.
Our algorithm needs to be rearranged only slightly to meet the above requirements. While keeping the rendering pipeline unchanged, the fragment program is varied according to figure 2. Steps 1 and 2 remain unchanged. Step 3 selects the silhouette color to be the layer color in TU 2 if the view vector is nearly orthogonal to the gradient. Otherwise, the background color is propagated to flag that no silhouette can be found at the current window position for the current layer. Step 4 keeps the current color in the framebuffer unchanged if a silhouette has already been encountered for a previous, lower layer. Step 5 ensures that only the first n layers contribute to the final image.
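A compact sketch of the modified steps 3 and 4 follows; the threshold eps and the dot-product form of the orthogonality test (equivalent to the cross-product form in figure 2 for unit vectors) are our assumptions.

typedef struct { float r, g, b; } rgb;

/* Modified step 3: take the layer color where the view vector is nearly
   orthogonal to the gradient, otherwise flag "no silhouette" with the
   background color. */
rgb silhouette_color(float cos_vg, /* <v_f, g_f> for unit vectors */
                     rgb c_layer, rgb c_back, float eps)
{
    float a = cos_vg < 0.0f ? -cos_vg : cos_vg;
    return (a < eps) ? c_layer : c_back;
}

/* Step 4: keep the framebuffer color once a silhouette of a lower layer
   has been found there, i.e. once it differs from c_back. */
rgb propagate(rgb c_new, rgb c_old, rgb c_back)
{
    int old_is_back = (c_old.r == c_back.r &&
                       c_old.g == c_back.g &&
                       c_old.b == c_back.b);
    return old_is_back ? c_new : c_old;
}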
5. Results
The first two rows in figure 3 show (from left to right) the first three layers of the head and engine datasets, together with the blended results on the very right.
Steps 1 and 2 coincide with figure 1.

3. Silhouette test:  $c_{new} \leftarrow \begin{cases} c_{layer} & : \big|\, \|v_f \times g_f\| - 1 \big| < \varepsilon \\ c_{back} & : \big|\, \|v_f \times g_f\| - 1 \big| \geq \varepsilon \end{cases}$

4. Silhouette propagation:  $c_{new} \leftarrow \begin{cases} c_{new} & : c_{old} = c_{back} \\ c_{old} & : c_{old} \neq c_{back} \end{cases}$

5. Layer test:  $c_{result} \leftarrow \begin{cases} (c_{old},\, layer) & : layer > n \\ (c_{new},\, layer) & : layer \leq n \end{cases}$

Figure 2. Survey of the fragment program for the extraction of visible and hidden silhouettes. We use the same terminology as in figure 1.
6. Conclusion
We presented a novel approach for extracting inner and outer boundaries of volumetric objects by introducing the concept of volumetric depth-peeling. The proposed approach is able to count the number of penetrations of a ray, cast into the scene, through inner and outer iso-surfaces. So far, this could only be achieved for two penetrations or with multiple rendering passes.
The user is able to explore the dataset in a fast and intuitive manner by adjusting only the iso-value and selecting the desired layer. Our method requires only a single rendering pass, independently of the number of layers extracted. We showed an application of the method in the field of NPR, where we are able to illustrate inner and outer silhouettes of the inspected object with colors of choice on a per-layer basis in a single pass. The proposed approach helps to understand the interior anatomy of datasets without requiring the user to manage good transfer functions.
7. Acknowledgements
This work has been funded by a German Israeli Foundation (GIF) grant. Special thanks to Stefan Guthe for proofreading the paper, as well as to our reviewers for important hints for improving our work.
References
[1] F. Crow. Transparency for computer synthesized images. Computer Graphics (Proceedings of SIGGRAPH 77), 11:242–248, July 1977.
[2] B. Csébfalvi and E. Gröller. Interactive volume rendering based on a bubble model. Graphics Interface, 2001.
[3] P. Diefenbach. Pipeline rendering: Interaction and realism through hardware-based multi-pass rendering. Ph.D. dissertation, University of Pennsylvania, Department of Computer Science, 1996.
[4] J. Diepstraten, D. Weiskopf, and T. Ertl. Transparency in interactive technical illustrations. In Computer Graphics Forum, volume 21, 2002.
[5] C. Everitt. Interactive order-independent transparency. White paper, NVIDIA, 2001.
[6] H. Hauser, L. Mroz, G.-I. Bischi, and M. Gröller. Two-level volume rendering: fusing MIP and DVR. IEEE Visualization 2000 Proceedings, pages 211–218, 2000.
[7] T. Heidmann. Real shadows, real time. In Iris Universe, No. 18, pages 23–31. Silicon Graphics Inc., Nov. 1991.
[8] V. Interrante, H. Fuchs, and S. M. Pizer. Enhancing transparent skin surfaces with ridge and valley lines. In IEEE Visualization 1995 Proceedings, pages 52–59, 1995.
[9] V. Interrante, H. Fuchs, and S. M. Pizer. Illustrating transparent surfaces with curvature-directed strokes. In IEEE Visualization 1996 Proceedings, pages 211–218, 1996.
[10] V. Interrante, H. Fuchs, and S. M. Pizer. Conveying the 3D shape of smoothly curving transparent surfaces via texture. IEEE Transactions on Visualization and Computer Graphics, 3(2):98–117, 1997.
Window size    Head    Engine
256²           6.66    8.73
512²           1.88    2.80

Table 1. Rendering performance in frames per second for the head and engine datasets at two window sizes.
Figure 3. Rendering results. The first two rows illustrate the depth-peeling method, applied to blend layers in equal ratios. The blended results for the head and the engine datasets are shown on the right, respectively. The third and fourth rows show the same datasets, this time visualized with the NPR technique of section 4. The pictures on the right show the combined results, respectively.
See text for details.