
Interactive Volumetric Lighting Simulating Scattering and Shadowing

Timo Ropinski∗   Christian Döring†   Christof Rezk-Salama‡

University of Münster · European Institute of Molecular Imaging (EIMI) · Mediadesign University of Applied Sciences, Düsseldorf

∗ e-mail: [email protected]   † e-mail: [email protected]   ‡ e-mail: [email protected]

Figure 1: A CT scan of a mouse (360 × 270 × 550 voxels) rendered with the proposed illumination model. The closeups on the right show that semi-transparent, diffuse, and specular materials are captured realistically.

ABSTRACT

In this paper we present a volumetric lighting model which simulates scattering as well as shadowing in order to generate high-quality volume renderings. By approximating light transport in inhomogeneous participating media, we are able to derive an efficient GPU implementation which achieves the desired effects at interactive frame rates. Moreover, in many cases the frame rates are even higher than those achieved with conventional gradient-based shading. To evaluate the impact of the proposed illumination model on the spatial comprehension of volumetric objects, we have conducted a user study in which the participants had to perform depth perception tasks. The results of this study show that depth perception is significantly improved when comparing our illumination model to conventional gradient-based volume shading. Additionally, since our volumetric illumination model is not based on gradient calculation, it is less sensitive to noise and therefore also applicable to imaging modalities incorporating a higher degree of noise, as for instance magnetic resonance tomography or 3D ultrasound.

Index Terms: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture.

1 INTRODUCTION

Today, volume rendering can be performed in real-time on consumer graphics hardware, thanks to a variety of acceleration techniques developed in the past. In practice, the shading in most cases is rather simple, i.e., gradient-based local Phong illumination, which comes with two major drawbacks. First, since the Phong illumination model has originally been developed for surface lighting, it relies on a well-defined gradient as a substitute for the surface normal when applied to volume data. Second, except for the voxels used for the gradient estimation, neighborhood information is not taken into account. This restricts the model to the simulation of local illumination phenomena, although advanced illumination models would significantly improve spatial comprehension [12].

While normalized gradients are usually a good approximation for the normal vectors of isosurfaces in CT data, modalities with a significantly lower signal-to-noise ratio will lead to severe degradation of the gradient quality. This is not only the case for more recently emerging modalities such as 3D ultrasound (US) or positron emission tomography, but also for very well established modalities such as magnetic resonance tomography (MRT). While gradient filtering is frequently applied when shading such modalities, its application is limited due to a loss of information and the performance penalty induced by on-the-fly gradient estimation. It is thus desirable to develop a volumetric illumination model providing good perceptual qualities without relying on gradient computation.

A gradient vector can be thought of as a very rough approximation of the vicinity of a voxel, where only 6 to 26 direct neighbors are taken into account. Due to this small number of contributing voxels, noise has a rather strong impact. Incorporating a larger neighborhood around the current voxel would improve the estimate. In this sense, the extreme case would be to consider the entire data set, which would pave the way for the integration of global illumination effects. The approach of incorporating a larger neighborhood into the shading computation is not new and has been previously exploited by other shading models [11, 20, 23]. In this paper, we follow this idea and present a physically plausible volumetric illumination model with a flexibility which meets the perceptual requirements of the user. Figure 1 shows the application of the proposed volumetric illumination model to a CT scan of a mouse. As can be seen, semi-transparent as well as diffuse and specular materials are rendered realistically.
The contributions of this paper are as follows. In order to generate a realistic representation of participating media for the human observer, the integration of spatially varying scattering effects is supported. Since interactive high-quality volume rendering is desired, we aim at an integration of these effects into GPU-based volume ray-casting. A perceptual problem of scattering effects for visualization is the vanishing of hard shadow borders, which are important for the spatial comprehension of a scene [26]. In our model the shadow computation is decoupled from the scattering, which allows substantially harder shadow borders. Many previous solutions to volumetric scattering lead to an unrealistic appearance of surfaces within the volume data; bone structures in CT often look like marble when scattering is applied. Our model allows us to easily control the amount of surface-like reflection and volumetric scattering. While the approximative nature of our model leads to physically plausible results, the solution to the light transport equation cannot be considered physically correct. Therefore, we have conducted a user study to assess the perceptual qualities of our volumetric illumination model. The obtained results indicate that our model significantly improves depth perception, i.e., it leads to faster and less error-prone depth decisions when compared to gradient-based Phong illumination. On top of that, our performance measurements show that in many cases even an increase in performance can be achieved compared to gradient-based lighting. To the best of our knowledge, the developed illumination model is the first one which meets all the described requirements at the same time while preserving full interactivity and providing good perceptual qualities.

In the remainder of this paper, we first discuss related work in Section 2, before deriving the interactive volumetric illumination model supporting scattering as well as shadowing in Section 3. The technical realization allowing interactive frame rates is described in the following two sections, whereby Section 4 explains the lighting computation and Section 5 includes details about the actual rendering. Section 6 demonstrates some applications of the developed illumination model and discusses the results of the conducted user study, before the paper concludes in Section 7.

2 RELATED WORK

In his seminal paper on optical models for volume rendering, Max [16] emphasizes the importance of global illumination effects. Accurate solutions of the physical equations of light scattering in participating media, however, are still too expensive for real-time applications.

Originally developed for surface lighting, ambient occlusion [29] and obscurance shading [17] have become popular in volume rendering. These techniques have been applied successfully to enhance the perception of spatial relationships for isosurface rendering [1, 24, 18]. Several techniques exist to calculate ambient occlusion for direct volume rendering. Hernell et al. [7, 8] and Ljung et al. [14] propose local ambient occlusion as a fast way to calculate soft shadows for volumetric data with varying transfer functions. Ropinski et al. [20] calculate dynamic updates for both ambient occlusion and scattering effects by utilizing local histograms. Schott et al. [23] propose a different, but also inexpensive implementation which restricts ambient occlusion computation to a user-specified cone angle by assuming that the light position coincides with the camera position (head light). The vicinity occlusion map proposed by Díaz et al. [9] is an adaptation of screen-space ambient occlusion for direct volume rendering. A significant drawback of this postprocessing technique is its dependency on a valid depth buffer, which restricts it to rather opaque renditions. The obscurance shading technique proposed by Ruiz et al. [21] is a generalization of ambient occlusion for illustrative purposes instead of soft shadows. Since ambient occlusion techniques simulate the appearance of soft shadows assuming ambient light only, they cannot account for external light sources.

To incorporate external light sources, different approaches have been proposed. A texture-slicing technique which allows for shadows caused by attenuation of one distant light source has been proposed by Behrens and Ratering [2], who superimpose a separate shadow volume on top of the scalar data set. While our approach is similar in spirit, it also supports scattering and can be applied to point light sources without posing constraints on the light position. This allows an interactive change of the light position, for instance in order to achieve a top-left lighting, which supports resolving convex-concave ambiguities and is therefore preferred in illustrations [3, 25]. Zhang et al. [27, 28] have proposed methods to incorporate shadows into sheet-based splatting techniques, which are less efficient compared to texture slicing or GPU-based volume ray-casting. Desgranges et al. [4] propose a gradient-free technique for shading and shadowing which produces visually pleasing shadowing effects. In the strict sense, however, it is accurate only for a head light, but in this case no shadows would be visible at all. For GPU-based volume ray-casting, Ropinski et al. [19] give an overview of different possibilities to incorporate shadows from external light sources. The most prominent implementation is shown by Hadwiger et al. [6], who demonstrate a GPU-based implementation of deep shadow maps with real-time capabilities.

In comparison to shadowing algorithms, much less work has been done in order to support scattering. Dobashi et al. [5] have proposed a hardware-accelerated rendering technique to account for atmospheric scattering effects in homogeneous participating media. To account for scattering of light in volume data, Kniss et al. [10, 11] have proposed a half-angle slicing technique which allows textured slices to be rendered from both light and viewing direction simultaneously. They account for scattering effects in slice-based volume rendering by sampling the incident light from multiple directions while updating the light's attenuation map. Similar to our implementation, scattering is restricted to a user-specified cone which represents a special type of phase function. We emphasize the importance of this publication; however, while the results show convincing light transport effects, their implementation is inconsistent in the strict sense: it accounts for light scattered around the dominant direction of incident light, but neglects scattering of outgoing light on its way towards the eye, which can be incorporated in our model. Furthermore, the technique can only be applied to slice-based volume rendering. Rezk-Salama [22] has proposed a fast Monte-Carlo-based algorithm for volumetric rendering, however, with scattering events restricted to a limited set of isosurfaces.

To these ends we propose a flexible optical model which incorporates scattering, shadowing, and specular reflections, and we demonstrate its efficient implementation based on GPU-based volume ray-casting. Our technique is flexible in the sense that it may be used to model scattering more physically accurately compared to previous work, but it also allows us to ease the restrictions of physical laws by decoupling shadow calculation from scattering.

3 VOLUMETRIC ILLUMINATION MODEL

Scattering and shadowing have a great influence on the quality of volume rendered images. The radiance $L_s(\vec{x}, \vec{\omega}_o)$ which is scattered from position $\vec{x}$ inside the volume in direction $\vec{\omega}_o$ can be defined as:

\[ L_s(\vec{x}, \vec{\omega}_o) = s(\vec{x}, \vec{\omega}_i, \vec{\omega}_o) \cdot L_i(\vec{x}, \vec{\omega}_i) + L_e(\vec{x}), \]

where $L_i(\vec{x}, \vec{\omega}_i)$ is the incident radiance reaching point $\vec{x}$ from direction $\vec{\omega}_i$, $L_e(\vec{x})$ is the emissive radiance, and $s(\vec{x}, \vec{\omega}_i, \vec{\omega}_o)$ describes the actual shading, which is dependent on both the incident light direction $\vec{\omega}_i$ as well as the outgoing light direction $\vec{\omega}_o$. Furthermore, $s(\vec{x}, \vec{\omega}_i, \vec{\omega}_o)$ is dependent on parameters which may vary based on $\vec{x}$, as for instance the optical properties assigned through the transfer function.
In the context of volume rendering, $s(\vec{x}, \vec{\omega}_i, \vec{\omega}_o)$ is often written as:

\[ s(\vec{x}, \vec{\omega}_i, \vec{\omega}_o) = \tau(\vec{x}) \cdot p(\vec{x}, \vec{\omega}_i, \vec{\omega}_o), \]

where $\tau(\vec{x})$ represents the extinction coefficient at position $\vec{x}$ and $p(\vec{x}, \vec{\omega}_i, \vec{\omega}_o)$ is the phase function describing the scattering characteristics of the participating medium at the position $\vec{x}$. When considering shadowing in this model, where the attenuation of external light traveling through the volume is incorporated, the incident radiance $L_i(\vec{x}, \vec{\omega}_i)$ can be defined as:

\[ L_i(\vec{x}, \vec{\omega}_i) = L_l \cdot T(\vec{x}_l, \vec{x}). \]

This definition is based on the standard volume rendering integral, where the light source is located at $\vec{x}_l$ and has the radiance $L_l$, and $T(\vec{x}_l, \vec{x})$ is the corresponding transparency between $\vec{x}_l$ and $\vec{x}$.

According to Max, the standard volume rendering integral simulating absorption and emission can be extended with these definitions to support single scattering and shadowing:

\[ L(\vec{x}, \vec{\omega}_o) = L_0 \cdot T(\vec{x}_0, \vec{x}) + \int_{\vec{x}_0}^{\vec{x}} \big( L_e(\vec{x}\,') + s(\vec{x}\,', \vec{\omega}_i, \vec{\omega}_o) \cdot L_i(\vec{x}\,', \vec{\omega}_i) \big) \cdot T(\vec{x}\,', \vec{x}) \, d\vec{x}\,', \]

with $\vec{\omega}_i = \vec{x}_l - \vec{x}\,'$, $L_0$ being the background intensity and $\vec{x}_0$ a point behind the volume. When extending this equation to support multiple scattering, it is necessary to integrate the scattering of light coming from all possible directions $\vec{\omega}_i$ on the unit sphere. However, Max has stated that this would be overkill for most scientific visualization applications [16]. Similar to Kniss et al. [11], we incorporate indirect lighting by blurring the incoming light within a given cone centered about the incoming light direction. In contrast to their technique, our approach can easily be extended to also account for scattering that happens when the light travels towards the eye. To achieve this, we use an additional scattering pass on the light volume, which blurs the outgoing light within a cone facing away from the viewing direction. Another important difference compared to [11] is that we use separate blurring for the chromaticity and the intensity of the light (luminance). Although such an approach is not physically correct, the decoupling of shadow computation from color bleeding provides us with additional flexibility to optimize the perception of spatial structures, which is known to be improved when hard shadow borders are present [26], since humans are very sensitive in perceiving small changes in luminance, but not in hue [13]. To guarantee hard shadow borders, we omit the blurring of the intensity part and just apply standard linear interpolation. Thus, our illumination model can be specified as follows:

\[ L(\vec{x}, \vec{\omega}_o) = L_0 \cdot T(\vec{x}_0, \vec{x}) + \int_{\vec{x}_0}^{\vec{x}} \big( L_e(\vec{x}\,') + q(\vec{x}\,', \vec{\omega}_i, \vec{\omega}_o) \big) \cdot T(\vec{x}\,', \vec{x}) \, d\vec{x}\,', \quad (1) \]

where the radiance $q(\vec{x}, \vec{\omega}_i, \vec{\omega}_o)$ is calculated as the transport color $t(\vec{x})$, as assigned through the transfer function, multiplied by the chromaticity $c(\vec{x}, \vec{\omega}_o)$ of the in-scattered light and modulated by the attenuated luminance $l_i(\vec{x}, \vec{\omega}_i)$:

\[ q(\vec{x}, \vec{\omega}_i, \vec{\omega}_o) = t(\vec{x}) \cdot c(\vec{x}, \vec{\omega}_o) \cdot l_i(\vec{x}, \vec{\omega}_i). \]

The transport color $t(\vec{x})$ as well as the chromaticity $c(\vec{x}, \vec{\omega}_o)$ are of course wavelength dependent, while the scalar $l_i(\vec{x}, \vec{\omega}_i)$ describes the incident (achromatic) luminance. Multiplying all together results in the incident radiance. The chromaticity $c(\vec{x}, \vec{\omega}_o)$ is computed from the in-scattered light

\[ c(\vec{x}, \vec{\omega}_o) = \int_{\Omega} \tau(\vec{x}) \cdot p(\vec{x}, \vec{\omega}_i', \vec{\omega}_o) \cdot c(\vec{x}, \vec{\omega}_i') \, d\vec{\omega}_i', \]

where $\Omega$ is the unit sphere centered around $\vec{x}$. In our model, $p(\vec{x}, \vec{\omega}_i', \vec{\omega}_o)$ is a strongly forward-peaked phase function:

\[ p(\vec{x}, \vec{\omega}_i', \vec{\omega}_o) = \begin{cases} C \cdot (\vec{\omega}_i' \cdot \vec{\omega}_o)^{\beta} & \text{if } \angle(\vec{\omega}_i', \vec{\omega}_o) < \theta(\vec{x}) \\ 0 & \text{otherwise.} \end{cases} \]

Similar to the work of Kniss et al. [11], the cone angle $\theta$ is used to control the amount of scattering and depends on the scalar data set at position $\vec{x}$. It is higher for less opaque voxels and can be at most $\frac{\pi}{4}$. The phase function is a Phong lobe whose extent is controlled by the exponent $\beta$, restricted to the cone angle $\theta$. The constant $C$ must be chosen with respect to $\beta$ to ensure energy conservation. If the gradient magnitude at $\vec{x}$ is high with respect to a user-specified threshold, we have a well-defined gradient and may as well compute specular reflections. This allows us to account for both surface-like reflection and volumetric scattering.
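Evaluated per sample, this phase function is cheap. The following GLSL sketch shows one possible evaluation; the uniform names and the clamping are our own illustrative assumptions, not code from the paper:

```glsl
// Sketch of the forward-peaked Phong-lobe phase function of Section 3.
// C and beta are assumed to be supplied as uniforms; C must be chosen
// with respect to beta so that energy is conserved (see text).
uniform float C;    // normalization constant
uniform float beta; // Phong lobe exponent

float phase(vec3 omegaIn, vec3 omegaOut, float theta) {
    // Both directions are assumed to be normalized; theta is the
    // per-voxel cone angle (at most pi/4).
    float cosAngle = clamp(dot(omegaIn, omegaOut), -1.0, 1.0);
    if (acos(cosAngle) >= theta)
        return 0.0;                               // outside the cone
    return C * pow(max(cosAngle, 0.0), beta);     // Phong lobe inside
}
```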

4 LIGHT VOLUME CALCULATION

To efficiently approximate the volumetric illumination model described in the previous section, we propose an algorithm which exploits the features of current GPUs. While in standard DVR the final color is obtained by incorporating emission and absorption along a viewing ray, we exploit an alternative concept in order to independently compute the scattering and the shadowing contribution. When considering just shadowing, a straightforward extension to a GPU-based volume ray-caster already requires a substantial overhead during rendering: it would require at least one shadow ray per sample. Since the number of samples directly influences the image quality, it is often rather high. When, for instance, considering a given image resolution of $1024^2$ and a volumetric data set having a resolution of $512^3$ voxels, due to the Nyquist theorem $O((512 \cdot 2) \cdot 1024^2) = O(1024^3) = n_{sr}$ shadow rays would be required. When taking into account that each of these $n_{sr}$ shadow rays needs to be sampled sufficiently densely, it becomes clear that this shadowing approach results in a high performance penalty. When additionally simulating scattering by computing different paths of light through the medium, it would become even more expensive. Therefore, we have decided to realize the derived volumetric illumination model not in image space, where a high number of sampling operations is necessary, but directly in volume space.

To compute the chromaticity $c(\vec{x}, \vec{\omega}_o)$, we process the volume in front-to-back order from the light position and blur the chromaticity of the incident light within a cone centered around the light direction, with the tip of the cone pointing away from the light source, while we directly accumulate the opacity in order to compute the luminance $l_i(\vec{x}, \vec{\omega}_i)$. To also simulate the scattering that occurs when light travels towards the eye, we can process the light volume again in back-to-front order with respect to the viewing position and blur the outgoing light within a cone centered around the viewing direction, with the tip pointing towards the eye position. By integrating this additional pass, pitch-black shadows can be avoided in some areas, resulting in a more vivid image.

Thus, the additional performance impact becomes independent of the sampling rate, and shadowing as well as scattering can be realized with only $O(v \cdot a)$ sampling operations, where $v$ is the number of voxels and $a$ the size of the desired neighborhood used for blurring the indirect achromatic light contribution. In the following subsections, we will explain our voxel space realization of the described volumetric illumination model.

4.1 Light Propagation

The proposed realization is based on an illumination volume. Alternatively, one could think about using a deep shadow map approach [15]. However, while deep shadow maps would allow a similar shadow impression and do not need to store a volumetric data set, they do not allow an easy integration of scattering and have the drawback that they incorporate a data-set-dependent, user-defined error value. In contrast, our technique does not need any additional user-assigned parameters.

Our voxel space light propagation model is similar in spirit to the shadowing approach presented by Behrens and Ratering [2]. However, in contrast, their approach is constrained to simulating shadows with one distant light source hitting the volume at an angle of 45°. Our technique is not confined in this way and therefore allows us to capture shadowing of arbitrary directional or external point light sources as well as scattering effects, leading to more realistic rendering results. In this subsection, we will explain our technique for an arbitrary point light source.

In order to exploit the features of current GPUs, we approximate the described model by processing the volumetric data set slice by slice. Thus, we compute an illumination volume, which is updated on the fly whenever the transfer function and/or the light position $\vec{x}_l$ has been changed. Since the shadowing as well as the scattering effects at each position $\vec{x}$ depend only on those voxels lying in between $\vec{x}$ and $\vec{x}_l$, it is sufficient to process the volumetric data set starting near $\vec{x}_l$ and proceeding in a direction facing away from the light source to compute $c(\vec{x}, \vec{\omega}_o)$ and $l_i(\vec{x}, \vec{\omega}_i)$. We start at the cube face $F_0$ being closest to the light source and propagate illumination information along one of the major axes of the volume, i.e., x, y or z. As illustrated in Figure 2, the used axis is determined by the principal processing direction $\vec{dir}_{max}$. Determining $\vec{dir}_{max}$ can be thought of as choosing that $F_j$ of all cube faces $F$ which has the largest projection as seen from $\vec{x}_l$. Since we want to propagate the lighting information from $\vec{x}_l$ through the volume, not all $F_j$ come into account, but only those which are visible from $\vec{x}_l$ and thus fulfill the equation $\vec{n}_{F_j} \cdot (\vec{x}_l - \vec{c}_{F_j}) < 0$, where $\vec{n}_{F_j}$ is the normal and $\vec{c}_{F_j}$ is the center of $F_j$. Based on this observation, we sort the three cube faces being visible from $\vec{x}_l$ such that $F_0$ minimizes $\vec{n}_{F_j} \cdot (\vec{x}_l - \vec{c}_{F_j})$ (see Figure 2). Thus $F_0$ is the cube face with the largest and $F_2$ the cube face with the smallest projection as seen from $\vec{x}_l$. Since $\vec{n}_{F_0}$ is the normal of $F_0$, the principal light propagation direction $\vec{dir}_{max}$ can be set to $\vec{dir}_{max} = -\vec{n}_{F_0}$.

Figure 2: Light is propagated through the volume slice by slice, starting at the cube face $F_0$ being closest to the light source located in $\vec{x}_l$.
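The face selection can be written down directly. The following GLSL-style sketch mirrors the criterion above verbatim; the array contents, the volume parameters, and the sign convention of the face normals (which must match the paper's, cf. Figure 2) are illustrative assumptions on our part:

```glsl
// Sketch: choose the principal propagation direction dir_max for a
// light position xl, following the paper's criterion
// n_Fj . (xl - c_Fj) < 0, with F0 minimizing that dot product.
// Depending on whether face normals point inward or outward, the
// comparison sign may need to be flipped; names are illustrative.
const vec3 faceNormals[6] = vec3[6](
    vec3( 1.0, 0.0, 0.0), vec3(-1.0, 0.0, 0.0),
    vec3( 0.0, 1.0, 0.0), vec3( 0.0,-1.0, 0.0),
    vec3( 0.0, 0.0, 1.0), vec3( 0.0, 0.0,-1.0));

vec3 computeDirMax(vec3 xl, vec3 volumeCenter, vec3 volumeHalfSize) {
    float minProj = 0.0;
    vec3 dirMax = vec3(0.0); // stays zero if xl lies inside the volume
    for (int j = 0; j < 6; ++j) {
        vec3 cFj = volumeCenter + faceNormals[j] * volumeHalfSize;
        float proj = dot(faceNormals[j], xl - cFj);
        if (proj < minProj) { // visible face, largest projection so far
            minProj = proj;
            dirMax = -faceNormals[j]; // dir_max = -n_F0
        }
    }
    return dirMax;
}
```

The axis permutation discussed below follows from this choice: the volume axis that dirMax runs along is the one mapped to the image-space z-axis. A light source inside the volume (for which no face qualifies) is treated separately at the end of this subsection.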
Now that we know in which direction we have to propagate the light information through the volume, we can start at the slice $S_0$ which is closest to $\vec{x}_l$ along the axis $\vec{dir}_{max}$. During the light propagation we step through all slices $S_i$ until we reach the last slice $S_{max}$. This propagation is performed on the fly by exploiting framebuffer objects and render-to-texture functionality. Since rendering into a 3D texture only allows rendering along the z-direction, we have to apply an axis permutation during the light propagation in order to support arbitrary light positions $\vec{x}_l$. However, for explanatory reasons, we first describe the simple case where $\vec{dir}_{max}$ is along the positive or negative z-axis, before briefly describing the permutation which is used to extend the technique also for $\vec{dir}_{max}$ pointing along the x- or y-axis.

To exploit render-to-texture functionality along the z-axis, we render each slice from $S_0$ to $S_{max}$ with texture coordinates reaching from $(0.0, 0.0, z_i)$ to $(1.0, 1.0, z_i)$, where $z_i$ is the z coordinate of slice $S_i$ with $z_i = \frac{i}{max}$. For each slice except $S_0$, we need to access the previously computed slice. Thus, during rendering each fragment corresponds to one voxel in the original data set and we can exploit a GLSL fragment shader to compute the lighting information as given by Equation 1. This is done based on the given cone angle described in Section 3. To compute both the scattering color as well as the shadowing intensity, we apply back-to-front compositing to $S_i$ and $S_{i-1}$, as known from standard DVR. When assuming that the current voxel's color and opacity are stored as voxel.rgb resp. voxel.a, the scattering color $c(\vec{x}_i, \vec{\omega})$ as well as the attenuated intensity $l_i(\vec{x}_i, \vec{\omega})$ at $\vec{x}_i$ can be computed in the shader as follows:

\[ c(\vec{x}_i, \vec{\omega}) = (1.0 - voxel.a) \cdot c(\vec{x}_{i-1}, \vec{\omega}) + voxel.a \cdot voxel.rgb, \]
\[ l_i(\vec{x}_i, \vec{\omega}) = (1.0 - voxel.a) \cdot l_i(\vec{x}_{i-1}, \vec{\omega}) + voxel.a, \]

whereby $c(\vec{x}_{i-1}, \vec{\omega})$ and $l_i(\vec{x}_{i-1}, \vec{\omega})$ are read around the intersection $\vec{p}_i$ of the current voxel's light vector $\vec{dir}_l = \vec{x}_l - \vec{x}_i$ with the previous slice. Since in this step we want to simulate the light traveling from the light source through the volume, the incident direction of $l_i(\vec{x}_{i-1}, \vec{\omega})$ and the outgoing direction for $c(\vec{x}_i, \vec{\omega})$ are the same. Thus, we can approximate the light traveling by considering the emission-absorption model. While $l_i(\vec{x}_{i-1}, \vec{\omega})$ may be obtained by performing a single texture fetch at $\vec{p}_i$ if we want to guarantee hard shadows, $c(\vec{x}_{i-1}, \vec{\omega})$ is obtained by blending voxels in the neighborhood of $\vec{p}_i$. The number of voxels and the blending factors are dependent on the cone angle $\theta$. We have implemented a Phong lobe phase function as shown in Section 3, where each contributing voxel is weighted with $\cos(\alpha)^{\beta}$, where $\alpha$ is the angle between the light vector from $\vec{x}$ and the vector towards the contributing voxel, and $\beta$ is a user-specified fall-off coefficient. To achieve energy preservation, we further divide the computed contribution by the number of considered neighbors, using the previously introduced constant $C$.
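Putting the pieces together, one propagation step can be sketched as a fragment shader. This is a deliberately simplified illustration: the cone sampling is reduced to a few fixed, equally weighted taps instead of the $\cos(\alpha)^{\beta}$-weighted neighborhood described above, and all sampler and uniform names are our own assumptions:

```glsl
#version 130
// Sketch of one light propagation step: computes c(x_i, w) (RGB) and
// li(x_i, w) (A) for the current voxel from slice S_{i-1}.
// Names and the simplified cone sampling are illustrative assumptions.
uniform sampler2D prevSlice;     // RGB: c(x_{i-1}, w), A: li(x_{i-1}, w)
uniform sampler3D volume;        // scalar data set
uniform sampler1D transferFunc;  // classification (color + opacity)
uniform vec2  lightOffset;       // offset of p_i on S_{i-1} along dir_l
uniform float coneRadius;        // tap spacing derived from theta
in  vec3 texCoord;               // (permuted) volume coordinate
out vec4 result;

void main() {
    // Classify the current voxel.
    vec4 voxel = texture(transferFunc, texture(volume, texCoord).r);

    // Intersection p_i of the voxel's light vector with slice S_{i-1}.
    vec2 p = texCoord.xy + lightOffset;

    // Luminance: a single fetch guarantees hard shadow borders.
    float liPrev = texture(prevSlice, p).a;

    // Chromaticity: blend a neighborhood around p_i. Four equally
    // weighted taps stand in for the Phong-lobe-weighted cone sampling;
    // dividing by the tap count keeps the contribution bounded.
    vec2 taps[4] = vec2[4](vec2(1.0, 0.0), vec2(-1.0, 0.0),
                           vec2(0.0, 1.0), vec2(0.0, -1.0));
    vec3 cPrev = vec3(0.0);
    for (int k = 0; k < 4; ++k)
        cPrev += texture(prevSlice, p + coneRadius * taps[k]).rgb;
    cPrev /= 4.0;

    // Emission-absorption compositing with the current voxel.
    result.rgb = (1.0 - voxel.a) * cPrev  + voxel.a * voxel.rgb;
    result.a   = (1.0 - voxel.a) * liPrev + voxel.a;
}
```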
In order to incorporate the scattering of the outgoing light, we can proceed similarly in a second pass. But instead of using $\vec{x}_l$ to determine $F_0$, we use the current camera position and set $\vec{dir}_{max}$ to $\vec{n}_{F_0}$ before propagating illumination from back to front. During this propagation we blend the outgoing chromaticity with the previously computed incoming chromaticity and choose the minimum of the scalar luminance. While this procedure is not physically correct, it leads to plausible results, as discussed in Section 6.

Implementing the described light propagation scheme brings up some technical issues. First, as already mentioned, it is only possible to render into a 3D texture along the z-axis. Therefore, the texture coordinates assigned to the rendered slices are permuted based on a permutation vector, which stores for each axis in image space to which axis it is mapped in volume space. Since this permutation is applied when generating the illumination volume, we have to apply its inverse to the texture coordinates when accessing this volume later on during rendering. Furthermore, for all texture fetches during the light propagation, it has to be ensured that the information on the current slice $S_i$ is not taken into account. This can either be achieved by manually filtering in the xy-plane only, or more easily by exploiting texture arrays, which allow filtering within one texture slice only. Finally, when applying the described light propagation algorithm in cases where $\vec{x}_l$ is located inside the volume, the lighting information cannot be propagated in a single direction. There are two alternative solutions for this problem. First, the lighting information can be propagated in both directions starting at $\vec{x}_l$. However, this approach may lead to noticeable discontinuities within the illumination volume. Therefore, we have decided to simply project $\vec{x}_l$ onto $F_0$ and apply the algorithm as described above in order to propagate lighting information in direction $\vec{dir}_{max}$. In our experiments this led to results of sufficient quality.
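Regarding the slice-exclusion issue mentioned above, texture arrays are convenient because hardware filtering never crosses layer boundaries. A minimal sketch of the corresponding fetch, with names of our own choosing:

```glsl
// Fetching from slice S_{i-1} only: with a sampler2DArray the layer
// index is not interpolated, so bilinear filtering stays within one
// slice of the illumination volume. Names are illustrative assumptions.
uniform sampler2DArray lightSlices;
uniform int currentSlice; // index i of the slice currently being written

vec4 fetchPreviousSlice(vec2 p) {
    return texture(lightSlices, vec3(p, float(currentSlice - 1)));
}
```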
While the illumination propagation can be done rapidly, since only basic operations are required, the four-channel illumination volume might be a limitation when dealing with large data sets. When taking into account the perceptual properties of the human visual sense, we are able to reduce the memory footprint of the illumination volume. Therefore, we have exploited the fact that chromaticity variations are perceived less than variations in luminance [13]. Keeping in mind that the scattering is propagated through the RGB channels and the attenuation through the A channel, we are able to downscale the scattering volume while retaining the original size of the attenuation part. This technique has also been incorporated in the JPEG image compression standard, where it is referred to as chroma subsampling. In fact, instead of downsampling a previously generated data set, we directly modify the light propagation in order to also achieve a performance gain. However, since this increases the distance between the slices to process, the chroma subsampling factor $g_{cs}$ is used to modify the opacity of the current voxel prior to compositing: $color.a = color.a \cdot g_{cs}$. This avoids luminance variations when enabling chroma subsampling. The technique has been applied when rendering the salamander data set shown in Figure 4.
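In the propagation shader, this correction is a single multiplication applied to the classified voxel before the compositing shown earlier; the uniform name is our own illustrative choice:

```glsl
// Opacity correction for chroma subsampling: the scattering (RGB) part
// of the light volume is propagated with an increased slice distance,
// so the voxel opacity is scaled by the subsampling factor g_cs before
// compositing. The uniform name is an illustrative assumption.
uniform float g_cs;

vec4 applyChromaSubsampling(vec4 color) {
    color.a *= g_cs;
    return color;
}
```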
4.2 Changing Light Position

While the light propagation technique described in the previous subsection would work well for arbitrary but fixed light positions $\vec{x}_l$, a dynamic change may introduce problems. This is due to the fact that $\vec{dir}_{max}$ always runs along one of the major volume axes. When $\vec{x}_l$ changes in a way that $\vec{dir}_{max}$ is also changed, the illumination is propagated along a different direction through the volume, which might lead to noticeable popping artifacts.

To deal with this problem, we introduce an additional light propagation direction $\vec{dir}_{blend}$. While $\vec{dir}_{max}$ is the light propagation direction based on the cube face $F_0$ with the largest projection size as seen from $\vec{x}_l$, $\vec{dir}_{blend} = -\vec{n}_{F_1}$ is based on $F_1$, the cube face with the second largest projection size (see Figure 3). When blending the results of the light propagation along these two principal axes, we can avoid popping effects, since the contribution of the possibly new $\vec{dir}_{max}$ has already been taken into account before the principal light propagation direction has changed. Thus, the light volume generation is done in two passes, whereby the first propagation is along $\vec{dir}_{max}$ as described in Subsection 4.1, and the second is along $\vec{dir}_{blend}$. Therefore, we process the volume along $\vec{dir}_{blend}$ and compute the scattering and shadowing along this axis. But instead of using two different illumination volumes during rendering, we blend the computed illumination results $I_1$ and $I_0$: $S = \alpha_0 \cdot I_0 + \alpha_1 \cdot I_1$.

Figure 3: To avoid popping artifacts when the main propagation axis $\vec{dir}_{max}$ is changed, we employ a blending between the two principal light directions given by $F_0$ and $F_1$.

As depicted in Figure 3, $\alpha_0$ and $\alpha_1$ are dependent on the angle $\beta$ between the light vector $\vec{dir}_l$ and $\vec{dir}_{max}$, resp. $\gamma$ between $\vec{dir}_l$ and $\vec{dir}_{blend}$. To achieve an appropriate blending, we have to ensure that the sum of $\alpha_0$ and $\alpha_1$ always equals 1. Furthermore, $\alpha_1$ must be 0 around the center area at $\vec{c}_{F_0}$, because otherwise popping would occur when moving the light source over $\vec{c}_{F_0}$. Choosing $\alpha_0 = 1 - \frac{2 \cdot \arccos(\beta)}{\pi}$ and $\alpha_1 = 1 - \frac{2 \cdot \arccos(\gamma)}{\pi}$ fulfills these requirements.
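In code, the blending amounts to only a few operations. In this sketch we read $\beta$ and $\gamma$ as the cosines of the respective angles between the light vector and the two propagation axes, so that acos recovers the angle; this reading, the clamping, and all names are our own assumptions:

```glsl
// Blend the illumination results I0 (propagated along dir_max) and
// I1 (along dir_blend) with alpha_0 = 1 - 2*acos(beta)/pi and
// alpha_1 = 1 - 2*acos(gamma)/pi. Names are illustrative assumptions.
const float PI = 3.14159265358979;

vec4 blendIllumination(vec4 I0, vec4 I1, float cosBeta, float cosGamma) {
    float alpha0 = 1.0 - 2.0 * acos(clamp(cosBeta,  -1.0, 1.0)) / PI;
    float alpha1 = 1.0 - 2.0 * acos(clamp(cosGamma, -1.0, 1.0)) / PI;
    // The text requires alpha0 + alpha1 = 1; renormalization guards
    // against small numerical deviations from that constraint.
    return (alpha0 * I0 + alpha1 * I1) / max(alpha0 + alpha1, 1e-4);
}
```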
5 RENDERING

Using current GPUs for rendering with our model is straightforward. While in principle the proposed lighting computation is applicable to any volume rendering technique, we use GPU-based volume ray-casting, whereby the shading is done with GLSL shaders. In the following, we assume the light volume has been generated as described in the previous section and is present as an RGBA data set, where the RGB channels represent the chromaticity $c(\vec{x}, \vec{\omega})$ and the A channel the attenuated light intensity $l_i(\vec{x}, \vec{\omega})$, both at the current voxel's position $\vec{x}$. Such light volumes are shown in column (a) in Figure 4. Based on the values fetched from these data sets, we solve the front-to-back compositing volume rendering integral. Thus, shading of $\vec{x}$ is a multiplication of the transport color $t(\vec{x})$, as assigned through the transfer function, and $c(\vec{x}, \vec{\omega})$, to which we apply an intensity adaptation based on $l_i(\vec{x}, \vec{\omega})$. In order to support emission, the emissive color $L_e(\vec{x})$ can be added. The compositing along the viewing ray is not altered.

Figure 4 shows the results, whereby (a) shows a volume rendering of the generated illumination volumes. From (b) to (d), the following techniques are shown. In (b) only shadowing is applied, whereby the voxel is assumed to be initially white. While we have used an RGBA light volume for all cases shown in this section, a single-channel shadow volume would have been sufficient for this shadowing-only case. In (c), we only show the scattering color $c(\vec{x}, \vec{\omega})$ as computed during our volume processing. The figures labeled with (d) show the application of the entire illumination model combined with a specular term. Therefore, we multiply the transport color $t(\vec{x})$ and $c(\vec{x}, \vec{\omega})$, which is modulated based on $l_i(\vec{x}, \vec{\omega})$ as shown in Equation 1. While this rendering technique generates realistic results for diffuse and scattering materials, it is not possible to capture materials having a high degree of specularity. Therefore, we have combined the technique with an achromatic specular intensity $c_s(\vec{x})$ as calculated with the gradient-based Phong model. Since we assume that the specular intensity is not wavelength dependent, we are able to multiply $c_s(\vec{x})$ and $l_i(\vec{x}, \vec{\omega})$, and to modulate the result by the gradient magnitude $|\nabla \tau(\vec{x})|$ in order to achieve the desired intensity:

\[ I = |\nabla \tau(\vec{x})| \cdot c_s(\vec{x}) \cdot l_i(\vec{x}, \vec{\omega}). \]

As can be seen, surfaces appear vivid, having a specular component, while diffuse elements are also present. Instead of modulating $c_s(\vec{x})$ with the gradient magnitude, it could also be modulated by the signal-to-noise ratio of the computed gradients. Thus, it would become possible to emphasize surfaces in an otherwise noisy data set.

It should be pointed out that, while we can use a single texture fetch in order to obtain $c(\vec{x}, \vec{\omega})$ as well as $l_i(\vec{x}, \vec{\omega})$ in the general case, we need to apply two texture fetches when exploiting the chroma subsampling introduced in Section 4: one in the original shadow volume and one being subject to a texture coordinate transformation into the scattering volume having a lower resolution.
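Inside the ray-caster, the per-sample shading then reduces to a couple of texture fetches and a handful of arithmetic operations. The following sketch includes the optional gradient-based specular term; the inverse-permutation matrix, the threshold handling, and all names are our own assumptions, and phongSpecular() stands in for any conventional Phong highlight:

```glsl
// Per-sample shading with the precomputed light volume (sketch).
// RGB of lightVolume holds c(x, w), A holds li(x, w).
uniform sampler3D lightVolume;
uniform sampler3D volume;
uniform sampler1D transferFunc;  // assigns transport color t(x) + opacity
uniform mat4  invPermutation;    // inverse of the propagation permutation
uniform float gradThreshold;     // user-specified gradient threshold

float phongSpecular(vec3 n, vec3 v, vec3 l) {
    // Conventional Phong highlight; the exponent is an arbitrary choice.
    return pow(max(dot(reflect(-l, n), v), 0.0), 32.0);
}

vec4 shade(vec3 pos, vec3 gradient, vec3 viewDir, vec3 lightDir) {
    vec4 tf = texture(transferFunc, texture(volume, pos).r);

    // Undo the axis permutation applied during light volume generation.
    vec3 lightPos = (invPermutation * vec4(pos, 1.0)).xyz;
    vec4 light = texture(lightVolume, lightPos);

    // Diffuse/scattering part: t(x) * c(x, w), adapted by li(x, w).
    vec3 color = tf.rgb * light.rgb * light.a;

    // Achromatic specular term I = |grad tau| * cs(x) * li(x, w),
    // evaluated only where the gradient is well defined (cf. Section 3).
    float gm = length(gradient);
    if (gm > gradThreshold)
        color += gm * phongSpecular(gradient / gm, viewDir, lightDir)
                    * light.a;

    return vec4(color, tf.a);
}
```

With chroma subsampling enabled, the single lightVolume fetch would be replaced by the two fetches described above: one into the full-resolution shadow volume and one, after the coordinate transformation, into the low-resolution scattering volume.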
Figure 4: The generated illumination volume (a) can be used to achieve different rendering effects: rendering with the attenuated light intensity $l_i(\vec{x}, \vec{\omega})$ (b), with the scattering chromaticity $c(\vec{x}, \vec{\omega})$ (c), and with full shading as defined by our illumination model (d).
6 RESULTS AND DISCUSSION

Besides the application to CT data sets as shown in Figure 1 and Figure 4, we have also applied our algorithm to other modalities having a lower signal-to-noise ratio. Figure 5 shows the application to an MRT scan of a human head having a resolution of 256 × 256 × 256 voxels. Compared to the gradient-based shading (left), our algorithm (right) results in a more realistic representation of the human skin, since scattering effects are taken into account. Furthermore, the shadowing gives the image more depth, which is especially visible below the nose and in the region around the eyes. We would like to point out the rather hard shadow borders, which would not be present when just applying scattering. While high-quality shading of MRT data with conventional shading techniques is already challenging, it becomes nearly impossible when applying it to 3D US data, which suffers from an even lower signal-to-noise ratio (see Figure 6 (left)). When instead applying our technique to the same data set (see Figure 6 (right)), structures start to emerge and the spatial comprehension becomes easier.

Figure 5: Comparison of an MRT scan of a human head (256 × 256 × 256 voxels) rendered with gradient-based Phong illumination (left) and with our technique (right). The scattering contribution results in a more realistic appearance of the skin, while the shadowing supports the spatial comprehension.

Figure 6: A 3D US scan of a human heart (148 × 203 × 149 voxels) rendered with gradient-based Phong illumination (left) and with our technique (right). Due to the low signal-to-noise ratio, spatial comprehension is difficult when using gradient-based shading. Instead, when using our technique, structures emerge.
To analyze the performance of our algorithm, we have measured the frames per second over 1000 frames at a resolution of 1024 × 1024, where the rendered object is rotated around its up-vector. With this scenario, we have compared three different cases: Phong shading, and our technique with and without light source movement. For the Phong shading we have exploited forward-difference gradients computed on the fly and have considered ambient, diffuse, and specular illumination. Table 1 shows the results we could achieve on a standard desktop computer having an Intel Core2 Duo CPU E8400 running at 3.00 GHz, 4 GB of main memory, and an nVidia GeForce GTX285 graphics board. As can be seen, in most cases our technique results in higher frame rates than conventional shading. Only when recomputing the illumination volume do the frame rates drop below those achieved with gradient-based shading. The only case where our technique was slower in both cases is the torso data set.

Table 1: Performance of our technique with and without light volume regeneration as compared to local Phong illumination.

data set     | size (voxels)    | Phong    | ours (no regen.) | ours (regen.)
mouse        | 360 × 270 × 550  | 12.6 fps | 16.5 fps         | 7.2 fps
torso        | 378 × 346 × 445  | 11.3 fps | 9.5 fps          | 4.2 fps
salamander   | 512 × 512 × 202  | 9.4 fps  | 17.3 fps         | 5.5 fps
MRT head     | 256 × 256 × 256  | 22.6 fps | 25.0 fps         | 12.2 fps
3D US heart  | 148 × 203 × 149  | 24.0 fps | 31.4 fps         | 14.3 fps
Since our volumetric illumination technique only approximates scattering and shadowing, we have decided to conduct a user study in order to assess its usefulness. To realize this, we have implemented an applet with which each participant had to accomplish three different tests. Since the usefulness of a rendering algorithm is always dependent on the application case in which it is utilized, and therefore hard to judge in general, we have decided to focus primarily on the facilitation of depth perception. This decision is based on the assumption that depth perception is essential for most 3D viewing tasks. In each of the three tests, the participants have been confronted with numerous rendering results generated either with the technique described in this paper or alternatively with gradient-based Phong illumination. To achieve better comparability, each image has been incorporated twice into each test series, once rendered with our model and once with Phong. Apart from the rendering technique, all other parameters, e.g., transfer function as well as camera and light position, have been left unchanged. In the first test, the depth comparison test, 38 images have been shown to each participant. The goal of this test was to analyze relative depth perception. Therefore, two regions have been emphasized in each image by using circular markers, and the user has been asked to click on the region which is closer to the viewer. During this test we have measured the time used for each image as well as whether the correct region has been selected. In the second test, the depth estimation test, we wanted to judge absolute depth perception. To do so, we have emphasized only one region in each of the 14 shown images by using a spherical marker. In this test, the participants had to estimate the depth of the emphasized region as a percentage of the overall depth extent of the shown object. The participants could input the estimated depth by using a slider as shown in Figure 7. In this test we have measured the difference between the real depth as computed via volume ray-casting and the estimated depth as transmitted using the slider. Additionally, we have also measured the time used for each image. As a third test, we have conducted a very subjective image comparison test, where we have shown 19 pairs of images to each participant. Again, both images were identical except for the used rendering technique. The subjects have been asked to select the image from the pair which they find more visually pleasing. While the order of the three tests has been fixed for each participant (depth comparison, depth estimation, image comparison), the order of the displayed images has been randomized in order to reduce order-dependent side effects. In addition, each test started with a simple example in order to make the participants familiar with it. Since these examples were unambiguous, we decided to omit the results for a test if the participant failed to do the example correctly. This was never the case.

Figure 7: During the depth estimation test, the users had to estimate the depth of the highlighted region by adapting the slider, representing the entire depth extent of the shown object, accordingly.
16 subjects (age 22-33, ∅: 27) participated in the study. Most subjects were students or members of the departments (computer science, mathematics, geoinformatics, and physics). All had normal or corrected-to-normal vision; 5 wore glasses or contact lenses. The time per subject, including a short pre-questionnaire, reading the instructions, training, and the experiment, was approximately 8 minutes, where in total 94 images have been processed in the three different tests. The results of the depth comparison test are shown in Figure 8. In the top left bar chart, the average time used for conventional shading (3935.4 ms) as compared to our technique (3250.5 ms) is shown. The lower left plot shows the percentage of correct depth decisions based on conventional shading (71.4%) and our technique (94.2%). For both these results we have performed a t-test, and we found a significant (p < 0.01) increase in accuracy as well as speed when using our technique for the relative depth perception in the tested cases. This is also reflected in the scatter plot shown in Figure 8 (right), where the average time needed and the percentage of correct depth decisions are plotted for each image. It can clearly be seen that the images generated with our technique form a cluster in the top left region. Additionally, in the depth estimation test the average depth estimation was more accurate and less time was used. However, based on the rather low number of subjects and images, we could not show a high significance. Finally, the results of the third test have shown that our volumetric illumination model is perceived as more aesthetic in 72.4% of all shown cases.

Figure 8: The results of the conducted depth comparison test. The plots on the left show the average time needed and the accuracy of depth decisions, while the scatter plot shows all images plotted based on their average elapsed time (in ms) and their average percentage of correct responses. As can be seen, images generated with our technique form a cluster towards quicker and more accurate depth decisions.

7 CONCLUSIONS AND FUTURE WORK

In this paper we have proposed a volumetric lighting model supporting scattering as well as shadowing effects, which allows interactive frame rates when implemented on current graphics hardware. We have derived the model and described our implementation based on illumination propagation. While the model is similar in spirit to the model proposed by Kniss et al. [11], it requires fewer sampling operations for the illumination calculation, since this is performed in volume space. This allows an easy integration into current state-of-the-art GPU-based volume ray-casters, which is not possible with the half-angle technique proposed by Kniss et al. Based on our implementation, we have shown several successful application examples, also including modalities with a low signal-to-noise ratio, such as 3D US and MRT. Since the presented model and its implementation are physically motivated but not physically correct, we have conducted a user study with which we have compared our technique to conventional gradient-based shading. Based on the results of three different tests, we were able to show that the proposed model is able to improve the depth perception of volume renderings. Furthermore, the technique is very easy to implement and can be easily combined with clipping planes and arbitrary classification techniques, while providing in many cases higher frame rates than gradient-based shading. This allows full control over all relevant rendering parameters such as transfer function, lighting parameters, and camera view. Therefore, we believe it has the potential to be integrated into other volume rendering frameworks.

Since we could show that the approximative nature of the model still allows improved spatial comprehension, the major drawback is the usage of an additional illumination volume. However, by exploiting chroma subsampling, the situation can be improved. In addition to that, we are positive that with the increasing memory size of graphics hardware, this will become even less important. In the future, we would like to extend our technique in order to be able to integrate area light sources. Since our algorithm propagates illumination information from the outside of the volume inwards, it could be possible to project area light sources onto the border of the volume in order to achieve the desired effect. Additionally, it would be interesting to investigate the influence of multiple light sources. On the evaluation side, we would like to compare the shadowing to the ground truth and evaluate the influence of chroma subsampling.

ACKNOWLEDGEMENTS

This work was partly supported by grants from the Deutsche Forschungsgemeinschaft (DFG), SFB 656 MoBil Münster, Germany (project Z1). The presented concepts have been integrated into the Voreen volume rendering engine (www.voreen.org).
REFERENCES

[1] K. M. Beason, J. Grant, D. C. Banks, B. Futch, and M. Y. Hussaini. Pre-computed illumination for isosurfaces. In Proceedings of the Conference on Visualization and Data Analysis (VDA), pages 1-11, 2006.
[2] U. Behrens and R. Ratering. Adding shadows to a texture-based volume renderer. In Proceedings of the IEEE International Symposium on Volume Visualization, pages 39-46, 1998.
[3] D. Dalby. Biological Illustration: A Guide to Drawing for Reproduction. Field Studies Council, 1980.
[4] P. Desgranges, K. Engel, and G. Paladini. Gradient-free shading: A new method for realistic interactive volume rendering. In Proceedings of the International Fall Workshop on Vision, Modeling, and Visualization (VMV), pages 209-216, 2005.
[5] Y. Dobashi, T. Yamamoto, and T. Nishita. Interactive rendering of atmospheric scattering effects using graphics hardware. In Proceedings of the ACM SIGGRAPH/EG Conference on Graphics Hardware (GH), pages 99-107, 2002.
[6] M. Hadwiger, A. Kratz, C. Sigg, and K. Bühler. GPU-accelerated deep shadow maps for direct volume rendering. In Proceedings of the ACM SIGGRAPH/EG Conference on Graphics Hardware (GH), pages 49-52, 2006.
[7] F. Hernell, P. Ljung, and A. Ynnerman. Efficient ambient and emissive tissue illumination using local occlusion in multiresolution volume rendering. In Proceedings of the IEEE/EG International Symposium on Volume Graphics (VG), pages 1-8, 2007.
[8] F. Hernell, P. Ljung, and A. Ynnerman. Interactive global light propagation in direct volume rendering using local piecewise integration. In Proceedings of the IEEE/EG International Symposium on Volume and Point-Based Graphics (VG), pages 105-112, 2008.
[9] J. Díaz, H. Yela, and P. Vázquez. Vicinity occlusion maps - enhanced depth perception of volumetric models. In Proceedings of Computer Graphics International (CGI), pages 56-63, 2008.
[10] J. Kniss, S. Premoze, C. Hansen, and D. Ebert. Interactive translucent volume rendering and procedural modeling. In Proceedings of IEEE Visualization 2002, pages 109-116, 2002.
[11] J. Kniss, S. Premoze, C. Hansen, P. Shirley, and A. McPherson. A model for volume lighting and modeling. IEEE Transactions on Visualization and Computer Graphics, 9(2):150-162, 2003.
[12] M. S. Langer and H. H. Bülthoff. Depth discrimination from shading under diffuse lighting. Perception, 29(6):649-660, 2000.
[13] M. Livingstone. Vision and Art: The Biology of Seeing. Harry N. Abrams, New York, 2002.
[14] P. Ljung, F. Hernell, and A. Ynnerman. Local ambient occlusion in direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 15(2), 2009.
[15] T. Lokovic and E. Veach. Deep shadow maps. In SIGGRAPH '00: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 385-392, New York, NY, USA, 2000. ACM Press/Addison-Wesley Publishing Co.
[16] N. Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99-108, 1995.
[17] A. Mendez, M. Sbert, and J. Cata. Real-time obscurances with color bleeding. In Proceedings of the Spring Conference on Computer Graphics (SCCG), pages 171-176, 2003.
[18] E. Penner and R. Mitchell. Isosurface ambient occlusion and soft shadows with filterable occlusion maps. In Proceedings of the IEEE/EG International Symposium on Volume and Point-Based Graphics (VG), pages 57-64, 2008.
[19] T. Ropinski, J. Kasten, and K. H. Hinrichs. Efficient shadows for GPU-based volume raycasting. In Proceedings of the International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), pages 17-24, 2008.
[20] T. Ropinski, J. Meyer-Spradow, S. Diepenbrock, J. Mensmann, and K. H. Hinrichs. Interactive volume rendering with dynamic ambient occlusion and color bleeding. Computer Graphics Forum (Eurographics 2008), 27(2):567-576, 2008.
[21] M. Ruiz, I. Boada, I. Viola, S. Bruckner, M. Feixas, and M. Sbert. Obscurance-based volume rendering framework. In Proceedings of the IEEE/EG International Symposium on Volume and Point-Based Graphics (VG), pages 113-120, 2008.
[22] C. Rezk-Salama. GPU-based Monte Carlo volume raycasting. In Proceedings of Pacific Graphics (PG), 2007.
[23] M. Schott, V. Pegoraro, C. Hansen, K. Boulanger, and K. Bouatouch. A directional occlusion shading model for interactive direct volume rendering. In Computer Graphics Forum (Proceedings of Eurographics/IEEE VGTC Symposium on Visualization 2009), pages 855-862, 2009.
[24] A. J. Stewart. Vicinity shading for enhanced perception of volumetric data. In Proceedings of IEEE Visualization (VIS), page 47, 2003.
[25] J. Sun and P. Perona. Where is the sun? Nature Neuroscience, 1:183-184, 1998.
[26] L. Wanger. The effect of shadow quality on the perception of spatial relationships in computer generated imagery. In SI3D '92: Proceedings of the 1992 Symposium on Interactive 3D Graphics, pages 39-42, 1992.
[27] C. Zhang and R. Crawfis. Volumetric shadows using splatting. In Proceedings of IEEE Visualization (VIS), pages 85-92, 2002.
[28] C. Zhang and R. Crawfis. Shadows and soft shadows with participating media using splatting. IEEE Transactions on Visualization and Computer Graphics, 9(2):139-149, 2003.
[29] S. Zhukov, A. Iones, and G. Kronin. An ambient light illumination model. In Proceedings of the EG Workshop on Rendering (EGRW), pages 45-55, 1998.
