Lighting TF

IEEE Visualization 2004
October 10-15, Austin, Texas, USA
0-7803-8788-0/04/$20.00 ©2004 IEEE

ABSTRACT

An important task in volume rendering is the visualization of boundaries between materials. This is typically accomplished using transfer functions that increase opacity based on a voxel's value and gradient. Lighting also plays a crucial role in illustrating surfaces. In this paper we present a multi-dimensional transfer function method for enhancing surfaces, not through the variation of opacity, but through the modification of surface shading. The technique uses a lighting transfer function that takes into account the distribution of values along a material boundary and features a novel interface for visualizing and specifying these transfer functions. With our method, the user is given a means of visualizing boundaries without modifying opacity, allowing opacity to be used for illustrating the thickness of homogeneous materials through the absorption of light.

CR Categories: I.3.6 [Computer Graphics]: Methodology and Techniques—Interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—color, shading, and texture

Keywords: direct volume rendering, volume visualization, multi-dimensional transfer functions, shading, transfer functions

1 INTRODUCTION

Much of the expressiveness of direct volume rendering comes from the transfer function, which refers to the mapping between data values and visual attributes used for rendering. The ability to control this mapping allows a scientist to add emphasis to those materials of interest, while removing those materials that could obscure or detract from the visualization. Often, it is desirable to enhance the boundaries between materials in a volume, and methods have been developed which extend the number of transfer function inputs to include neighboring voxels' characteristics. These methods have used the extended domain of the transfer function to more easily specify the opacity of material boundaries. The appearance of a surface, however, is not only a function of the material color and opacity, but also of the reflection of light. Lighting plays a critically important role in illustrating surfaces; in particular, lighting variations provide visual cues regarding surface orientation.

Most research in the use of higher-dimensional transfer functions has focused on the specification of color and opacity, despite the fact that transfer functions can be used for specifying any of a number of optical properties [6], including surface illumination. In this paper, we present a novel method for specifying the appearance of surface illumination using a higher-dimensional transfer function, with an input domain consisting of samples read along the gradient direction. The technique provides users with precise control over how lighting is used to illustrate the different material boundaries in a volume. In addition, we discuss a novel visual interface for specifying lighting transfer functions that is designed to improve the user's intuition as to where material transitions occur with respect to scalar value.

In the real world, lit surfaces are typically seen where there is a transition between transparent and opaque material, such as a solid object in air. However, lit surfaces can also appear at the boundary between two similarly transparent materials, such as at the boundary between glass and air. Our method allows the user to position lighting at surfaces independent of the opacity at a surface. This can be contrasted with previously developed methods that enhance
the appearance of surfaces through the manipulation of opacity. By avoiding the use of opacity for illustrating surfaces, opacity-based attenuation can be reserved for indicating material thickness and depth. Furthermore, our method can be used to illustrate the surfaces of materials of extremely low opacity through lighting. The use of such low opacity permits underlying material to be less obscured by light absorption. An example of an engine rendered with our technique is shown in Figure 1(d).

In this paper we also describe how the method can be used to give a high degree of control over the lighting of near-homogeneous materials in both pre- and post-filtered data space. We discuss how our technique can make use of a hybrid between pre-filtered and post-filtered gradient calculations, and how it is able to take advantage of the favorable attributes of both methods. Finally, we describe an efficient hardware implementation.

Our method takes advantage of lighting characteristics that are not accounted for in the conventional model used in direct volume rendering, namely, the variation of surface reflectivity independent of opacity. Our method was not designed with the goal of creating physically realistic images, but to incorporate additional lighting principles in a manner that can be easily controlled by the user to enhance the appearance of those materials of most interest.

2 BACKGROUND

The seminal works of Levoy [12] and Drebin et al. [2] describe methods for the direct volume rendering of scientific data. Although sharing many similarities, these two works have several subtle but important differences with regard to how shading is performed. Levoy argues that one of the key features of direct volume rendering is that, unlike polygon isosurface rendering, shading can be performed independent of classification. He makes use of the Phong shading model with normals computed directly from the gradient of the scalar field using central differences. In scientific visualization, direct volume rendering illumination is typically performed in this manner. Levoy also describes the enhancement of boundary surfaces by making them more opaque depending on the magnitude of the gradient vector.

Drebin et al. [2] map voxel scalar values to density values that are computed as a result of classification, but are separate from opacity. In contrast to the method described by Levoy, the normals used for lighting are computed from the gradient of scalar values after applying the density table lookup. Thus they make use of post-classified gradients rather than pre-classified gradients. Further, they describe the modulation of lighting based on gradient, which has the effect of not illuminating homogeneous materials in post-classified data space.

The computation of gradients from the derived density volume as described by Drebin et al. has the advantage of more closely following the physical interaction of light in a volume density, but requires the recomputing of gradients when the classification function is changed. The computation of gradients prior to classification, on the other hand, yields normal directions that are more closely tied to the original data, and are not influenced by potential errors in classification. Furthermore, these normals can be generated in a preprocessing step, and do not need to be recomputed if the classification function is changed. The fact that these gradients do not necessarily match the densities used during rendering, however, can yield normals that are oriented in directions contrary to the densities used during rendering, particularly when regions of high data value are mapped to low opacity.

The particle models used in volume rendering make the assumption that a volume consists of a distribution of small, fully opaque particles of varying densities. Max [14] provides an in-depth tutorial on this model and its assumptions. The alpha (opacity) term used in volume rendering can be thought of as the probability that a photon collides with a particle for some unit depth. This explains the α and (1 − α) terms in the discrete back-to-front volume rendering compositing relation

C_out = α C + (1 − α) C_in

where C_out is the color of the ray leaving a sample location, C_in is the color entering the sample location, C is the color of the sample location, and α is that sample's opacity.

When a ray is cast through a volume, some portion of light α collides with particles, yielding color from scattering. The remaining light (1 − α) continues through the volume, revealing additional material. The fact that alpha appears in both terms of this equation is a direct consequence of using a model that only accounts for fully opaque particles. This restricts how opacity modifies the appearance of material in volume visualization. For example, increasing the opacity at a material's surface to increase surface visibility might have the undesirable effect of also obscuring the material behind it. Furthermore, increasing the opacity at a material's boundary is often not physically valid. A large class of transparent or semitransparent materials such as glass and water have visible surfaces, not because their surfaces are opaque, but because of the increased reflectivity at their material boundaries. Surface reflectivity results from a variation in the index of refraction as approximated by the Fresnel equations. A variation in the index of refraction, although related to material density, can occur independent of a variation in light obstruction.

Enhancing opacity at material boundaries is analogous to rendering frosted glass. And while it can provide a powerful means of viewing surfaces, it also limits how opacity can be used. The method we present uses a second two-dimensional lighting transfer function to decouple the specification of the light emitted at a sample location through scattering from the occlusion of light from further materials through absorption.

3 RELATED WORK

Kindlmann and Durkin [7] describe joint 2D scalar/gradient magnitude histograms for the semi-automatic specification of transfer functions that enhance surface opacity based on scalar value and gradient magnitude. In a later work, Kniss et al. [9] present a widget-based interface for the intuitive specification of higher-dimensional opacity/color transfer functions and describe how these higher-dimensional transfer functions can be implemented efficiently in graphics hardware. They also describe the rendering of shadows by accumulating attenuated light in an off-screen rendering buffer. They avoid the shading of homogeneous material using scalar gradient magnitude. Our work differs from theirs in that their work focused on giving users a high degree of control over opacity, with efficient conventional lighting. Our work focuses exclusively on giving users expressive control over lighting, with a new user interface geared toward specifying the shading for different material boundaries.

Much work has been done on using more sophisticated volumetric lighting models for more realistic rendering of natural phenomena [1, 5, 13, 18, 4, 15]. In addition, some work has been done in applying more realistic models for visualization. Krueger [11] applies transport theory to volume rendering using a simulation of particles that have various lighting interactions with a data set. Rodgman and Chen [17] allow for transfer functions that include index of refraction to model the effect of the bending of light due to refraction. Noordmans et al. [16] model spectral changes in the color of light as it interacts with material and allow for chromatic opacity, rather than a single scalar alpha. Kniss et al. [10] use graphics hardware to efficiently generate highly realistic renderings that model the effects of volumetric shadows, forward scattering, and chromatic attenuation. In contrast, the primary goal of our work
is not to create more realistic images, but to provide control over the use of lighting to generate imagery better suited for illustrating the specific structures of interest to scientists.

Kindlmann and Weinstein [8] use illumination as a means of illustrating tensor fields. In addition to the use of color and opacity transfer functions, they describe the idea of using lit-tensors to indicate the type and orientation of anisotropy. Hauser et al. [3] describe a multi-level volume rendering method that simultaneously uses different rendering techniques, such as direct volume rendering and maximum intensity projection, to illustrate different objects in a volumetric data set.

4 LIGHTING TRANSFER FUNCTIONS

The higher-dimensional transfer function user interface described by Kniss et al. [9] uses transfer function widgets shown above a joint 2D scalar value/gradient magnitude histogram. This type of 2D histogram for the engine data set in Figure 1 is shown in Figure 2(a), with the horizontal axis used for scalar value and the vertical axis for gradient magnitude. The low gradient groupings correspond to different homogeneous regions in the volume. As described by Kindlmann and Durkin [7], the arches between these groups are the boundaries between the different materials in a volume. In our work, we would like to give the user the ability to enhance the boundary between any pair of materials, which corresponds to an arch on this joint 2D histogram. This is difficult since the arches intersect, a phenomenon that occurs frequently in the regions of low gradient above homogeneous regions on the 2D histogram.

Kindlmann and Durkin use the second derivative along the gradient direction to further disambiguate boundaries, since an idealized boundary has a high gradient and zero second derivative. The accuracy of the calculated second derivative, however, is highly susceptible to noise. Furthermore, the combined scalar value, gradient magnitude, and second derivative along the gradient direction do not provide a direct means of distinguishing the different types of material boundaries in a volume. With this in mind, we make use of a lighting transfer function that takes into account data values perpendicular to a possible surface at a sample, which is accomplished by sampling points along a sample's normalized gradient direction. As illustrated in Figure 3, for each sample two additional scalar values are used: one in the direction of the gradient and one in the opposite direction. The result is three samples perpendicular to the direction of a possible boundary surface. The center sample is used with a 1D scalar transfer function for the assignment of material color and opacity. The two samples along the gradient, above and below, are used as input to a two-dimensional lighting transfer function for the assignment of ambient, specular, and diffuse lighting coefficients.

With shading, the color of a rendered sample can be expressed as

C = C_voxel (I_a + I_d) + I_s

where I_a, I_d, and I_s are the ambient, diffuse, and specular illumination of that sample. When a transfer function colormap is used along with the Phong shading model, this can be expanded to

C = ColorTF(S)(k_a + k_d MAX(N · L, 0)) + k_s MAX((N · R)^n, 0)

where S is the sample's scalar value, ColorTF() is the transfer function colormap lookup table, N is the normalized gradient direction, L is the light direction, R is the reflected light direction, n is a specular "shininess" exponent, and k_a, k_d, and k_s are the ambient, diffuse, and specular lighting coefficients respectively. With our lighting transfer function method, the lighting coefficients are replaced with lookup tables that are functions of the two scalar values, S1 and S2, read along the gradient direction. The color of a rendered sample therefore becomes

C = ColorTF(S)(LTF_ka(S1, S2) + LTF_kd(S1, S2) MAX(N · L, 0)) + LTF_ks(S1, S2) MAX((N · R)^n, 0)

where LTF_ka(), LTF_kd(), and LTF_ks() are lookup tables for the ambient, diffuse, and specular lighting coefficients respectively.

The two samples read along the gradient direction are well-suited inputs for our lighting transfer functions since they provide an indication of whether a material boundary occurs at a given sample and which materials exist on each side of that boundary. For example, if the two samples have the same value, then the center sample position is likely to be in a homogeneous region. Furthermore, if the scalar values above and below the current sample belong to two different materials, then the sample is likely at a material boundary. Intuition as to why this approach works can be gained by further considering the nature of the 2D histogram material boundary arches described by Kindlmann and Durkin. In their work, they define material boundaries as the finitely thin transitional regions between homogeneous materials and explain that, due to the band-limited nature of most data acquisition systems, a reconstructed volume will always contain a degree of blurring. They demonstrate that if a volume is reconstructed with a Gaussian reconstruction kernel, the transitions between idealized homogeneous regions have the highest gradient values at the center of the boundary transition, diminishing to zero moving away from the boundary. They illustrate that this is true in practice with a variety of data sets whose 2D joint scalar/gradient magnitude histograms contain distinct arches between scalar values corresponding to homogeneous regions. For a boundary between two materials, an arch starts with reduced gradient magnitude in a homogeneous region, increases to a higher gradient magnitude between the two materials, and then recedes moving toward the other homogeneous region.

Rather than relying on scalar value and gradient to identify material boundary transitions, our method uses two scalar values read on both sides of a rendered sample along the gradient direction. This provides a more direct means of selecting the various boundaries in a volume by essentially following the arches and reading the homogeneous scalar values on both sides of a material boundary. Consider the case where the two supplemental scalar samples are separated by a distance greater than the thickness of a material boundary. In this case, if the gradient direction is perpendicular to the material boundary, the scalar value pair will have the same values throughout a material boundary, with their values equal to the scalar values on each side of the boundary. By following the gradient direction, this method is able to read from within the adjacent homogeneous regions for the classification of these transitions. The technique is sensitive to the thickness of a material boundary. Since the scalar value pairs are read during rendering time, the distance between sample pairs is a parameter that can be adjusted depending on the amount of blur present in a volume. For the data sets used in our work, we found reading a distance of one voxel in each direction to be sufficient.

4.1 User Interface

Displaying a histogram of a transfer function's domain can provide insight into the structures found in the volume and make transfer function specification more intuitive. Unfortunately, plotting the scalar value pairs read along the gradient direction yields the rather unintuitive 2D histogram shown in Figure 2(b). Interpreting this histogram is made more difficult by the fact that both the vertical and horizontal axes correspond to the same type of data, namely scalar value. This can be contrasted with a scalar/gradient magnitude histogram, which plots two distinctly different properties. The band along the diagonal corresponds to homogeneous
Figure 2: In (a), the joint scalar value/gradient magnitude 2D histogram of the engine data set is shown. The joint 2D histogram of scalar value pairs sampled along the gradient direction is shown in (b). Image (c) contains the line-based histogram of scalar value pairs used in our work. The vertical bands show homogeneous regions, while the diagonal lines indicate material boundaries.
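The joint histogram of Figure 2(a) is straightforward to compute from the volume itself. The following sketch (Python/NumPy, not part of the paper; the function and parameter names are our own) bins each voxel's scalar value against its central-difference gradient magnitude:

```python
import numpy as np

def joint_histogram(volume, bins=64):
    """Joint scalar value / gradient magnitude histogram, cf. Figure 2(a).

    volume: 3D array of scalar values.
    Returns (hist, value_edges, gradmag_edges).
    """
    # Central-difference gradients along each axis, as in Levoy-style shading.
    g0, g1, g2 = np.gradient(volume.astype(float))
    gradmag = np.sqrt(g0**2 + g1**2 + g2**2)
    # Bin every voxel by (scalar value, gradient magnitude).
    hist, v_edges, g_edges = np.histogram2d(
        volume.ravel(), gradmag.ravel(), bins=bins)
    return hist, v_edges, g_edges
```

Plotting `hist` with a log-scaled color map would reproduce the familiar arch pattern for a volume containing blurred material boundaries.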
Figure 3: The two-dimensional lighting transfer functions in our work use as input scalar value pairs sampled along the gradient direction, as shown in green. The center voxel samples used for the color and opacity transfer function are shown in blue, while the gradient direction is illustrated in red. The black line illustrates a ray being cast through a volume.

Figure 4: The user interface for our lighting transfer functions consists of widgets for specifying pairs of 1D transfer functions that indicate the intensity range for each side of a material boundary of interest. The selected boundary ranges are shown above the line-based boundary histogram.
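The per-sample shading of Section 4 can be sketched as follows. This is a simplified CPU illustration, not the paper's fragment-program implementation; `color_tf`, `ltf_ka`, `ltf_kd`, and `ltf_ks` are hypothetical callables standing in for the lookup tables, nearest-neighbor reads replace hardware filtering, and the sample position is assumed to be in the volume interior:

```python
import numpy as np

def shade_sample(volume, pos, light_dir, color_tf, ltf_ka, ltf_kd, ltf_ks,
                 step=1.0, shininess=32):
    """Shade one sample: read S1, S2 one step above and below `pos` along
    the normalized gradient, and use them to index the 2D lighting
    transfer functions while the center value S drives color."""
    z, y, x = np.round(pos).astype(int)
    # Central-difference gradient at the nearest voxel (interior assumed).
    g = 0.5 * np.array([volume[z+1, y, x] - volume[z-1, y, x],
                        volume[z, y+1, x] - volume[z, y-1, x],
                        volume[z, y, x+1] - volume[z, y, x-1]], dtype=float)
    n = g / (np.linalg.norm(g) + 1e-12)          # normal from gradient

    def sample(p):                               # nearest-neighbor read
        i = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
        return volume[tuple(i)]

    s = sample(pos)                              # center value -> color
    s1 = sample(pos + step * n)                  # one voxel "above"
    s2 = sample(pos - step * n)                  # one voxel "below"

    ka, kd, ks = ltf_ka(s1, s2), ltf_kd(s1, s2), ltf_ks(s1, s2)
    l = light_dir / np.linalg.norm(light_dir)
    r = 2.0 * np.dot(n, l) * n - l               # reflected light direction
    diffuse = max(np.dot(n, l), 0.0)
    specular = max(np.dot(n, r), 0.0) ** shininess
    return color_tf(s) * (ka + kd * diffuse) + ks * specular
```

The returned value corresponds to the sample color C in the lighting-transfer-function shading equation of Section 4, before opacity compositing.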
material, while vertical deviations illustrate boundaries. The fact that this vertical deviation is not perpendicular to the diagonal axis
Figure 6: These examples illustrate how different material boundaries can be selected with our user interface and lighting transfer functions.
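A lighting transfer function that selects a single boundary, as in Figure 6, can be thought of as a 2D table over the quantized (S1, S2) pair that is non-zero only where the pair spans the two material value ranges of interest. A hedged sketch follows; the function name and the hard binary selection are illustrative only, since the paper's interface builds the table from pairs of smooth 1D transfer functions:

```python
import numpy as np

def boundary_lighting_tf(size, range1, range2, coeff=1.0):
    """Build a 2D table of lighting coefficients, indexed by the quantized
    gradient-aligned value pair (S1, S2), that lights only the boundary
    between two chosen material value ranges (half-open, in [0, size))."""
    def in_range(v, r):
        return (v >= r[0]) & (v < r[1])

    s1, s2 = np.meshgrid(np.arange(size), np.arange(size), indexing='ij')
    # Light pairs that span the two materials in either order, so the
    # boundary is selected regardless of gradient orientation.
    sel = (in_range(s1, range1) & in_range(s2, range2)) | \
          (in_range(s1, range2) & in_range(s2, range1))
    return np.where(sel, coeff, 0.0)
```

Because homogeneous regions map near the diagonal (S1 close to S2), such a table leaves them unlit unless a diagonal range is selected explicitly.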
the gradient magnitude of that scalar data. Drebin et al. [2] attenuate surface lighting based on the gradient magnitude of post-classified density. With our technique, the user can quickly specify a lighting transfer function that attenuates using either measure of homogeneity. By not selecting vertical lines in the lighting transfer function, the user can attenuate surface lighting of homogeneous material in data space. By not specifying lines that link regions with similar opacity, attenuation of lighting based on post-classification can be performed. It should be emphasized that the main power of our technique lies not in its control over attenuating light in homogeneous regions, but rather in its use in enhancing lighting at those key boundaries of interest. These boundaries can even occur at subtle variations in scalar values that are near homogeneous in both scalar and classified data space.

4.2 Hardware Implementation

We implemented our technique in graphics hardware using view-aligned textured polygons [19]. The additional rendering computation is implemented using a fragment program. During rendering, each sample's scalar value and gradient are read. The gradient is normalized, and the pair of additional scalar values along the gradient direction are read. The center scalar value is used for a conventional opacity/color transfer function lookup, while the scalar value pair read along the gradient are used as texture indices into the 2D lighting transfer function. These lighting coefficients are used to modulate specular and diffuse lighting read from an environment map.

The fact that our method uses lighting to illustrate highly translucent material requires slight modifications to accommodate higher dynamic range lighting. Specifically, since the surfaces rendered with our method do not have enhanced opacity, it is often necessary that the alpha-multiplied RGB color at a surface sample exceed its alpha value. The unmodulated color for a voxel must therefore be larger than one, the intensity often treated as brightest in hardware implementations. A light scaling factor that can be larger than one is therefore passed to the fragment program that modulates all light intensities. Compositing RGB colors larger than alpha also requires alpha modulation of the source color to be done in the fragment program rather than in the framebuffer blending stage. It is physically valid to composite colors that are brighter than the alpha value. Reflective water and flames are two physical phenomena that can reflect or emit large amounts of light despite their low absorption.

As discussed previously, gradient directions used for lighting are typically computed from the unclassified scalar data and not post-classified density as described by Drebin et al. [2]. Pre-classified gradients are more closely tied with the original data and can be precomputed, but can result in directions that point opposite to the classified opacity distribution. We use the additional scalar values read along the gradient direction to implement a hybrid between the two techniques. First, the gradients of the original scalar field are precomputed. The two additional scalar values are read along this linearly interpolated normalized direction. Then, based on the sign of the difference between the post-classified sample densities, the direction of that gradient is adaptively flipped to point in the direction of maximum variation in post-classified density. This difference is precomputed and stored in the alpha channel of the same 2D lookup table used for the lighting transfer function. The gradient directions computed with this method are not equivalent to post-classified gradients, since pre-classified and post-classified gradients are rarely parallel. Figure 7(a) shows a tooth rendered with standard pre-classified gradients and single-sided lighting. The dentin structure is not lit since it is a region of relatively low scalar value, resulting in normals oriented inward. In Figure 7(b) the same normals are used with two-sided lighting. The dentin is properly lit, but some back-facing surfaces are lit as well, as indicated by the arrow. In Figure 7(c) single-sided lighting is used with our hybrid gradient computation technique. The dentin is lit with normals that have been flipped away from its surfaces, while the back-facing surfaces are not lit.

5 RESULTS

We experimented with our method using a wide variety of volumetric data sets generated from both measurement and numerical simulation. Figure 1 contains images generated from a CT scan of an engine. Notice in Figure 1(a) that with uniform opacity and the absence of lighting in the brown region, very few of the inner structures are visible. With the addition of lighting in Figure 1(b) the surface structures become more visible, but the erratic variations of lighting in near-homogeneous regions make these boundaries more difficult to see. By increasing the opacity at these boundaries and reducing the opacity in the homogeneous regions in Figure 1(c), the boundary surfaces are clearly visible. However, the sense of material thickness from the uniform absorption of light in homogeneous regions is lost. Our lighting transfer function technique is illustrated in Figure 1(d), which uses the same opacity and color transfer function as (a) and (b). Notice how lighting is used to selectively enhance boundary surfaces of interest, while uniform opacity is used to illustrate material thickness.

In Figure 8 an electron probability distribution of a protein molecule is shown. In Figure 8(a) lighting is not used. The blue and yellow regions have the same uniform opacity, which allows
Figure 10: With our technique, lighting parameters can be manipulated independent of opacity. Notice that despite variation in opacity in the
three images, the surface in green remains visible due to surface shading.
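The behavior shown in Figure 10 follows from the compositing relation of Section 2: a sample's contribution is its lit color scaled by α, so a nearly transparent but brightly lit surface can still dominate the composited result. A minimal sketch (Python, illustrative only; the paper's implementation performs this blend in a fragment program):

```python
import numpy as np

def composite_back_to_front(samples):
    """Back-to-front compositing: C_out = alpha * C + (1 - alpha) * C_in.

    samples: list of (rgb, alpha) pairs ordered back to front. The rgb
    values may be "brighter than alpha" (a strongly lit, nearly
    transparent surface), which the relation composites without clamping.
    """
    c = np.zeros(3)
    for rgb, alpha in samples:                   # back to front
        c = alpha * np.asarray(rgb, float) + (1.0 - alpha) * c
    return c
```

For example, a front sample with opacity 0.1 but lit color 4.0 still contributes 0.4 to the ray, enough to remain visible over an opaque gray sample behind it.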
Figure 9: The left image of the vortex uses opacity to illustrate a surface. The right rendering was generated using lighting transfer functions. Notice that since opacity is not modulated at the surface, the thickness of the pink material is made evident by the amount it occludes material behind it.

6 DISCUSSION

This paper presents a method for the manipulation of surface lighting parameters to enhance material boundaries of interest. The manipulation of surface shading at arbitrary scalar value transitions is physically justifiable by the fact that reflections can exist at surfaces with homogeneous opacity or density because of variations in the index of refraction. However, the method can enhance the lighting at surfaces in a manner that is not physically realistic. We believe this is not a limitation of the work. Much of scientific visualization is far from photorealistic. For example, the assignment of color and opacity to a temperature or velocity field departs far from realism, but meets the goals of scientific visualization: the creation of imagery that helps illustrate scientific phenomena. Similarly, we believe the utility of the method we have developed lies not in necessarily being more physically valid, but rather in giving scientists the freedom to enhance lighting in a manner that gives greater control over the appearance of material boundaries.

As is the case with all methods that make use of gradients in direct volume rendering, our method depends on the ability to compute gradients in a volume with reasonable accuracy. Our method is thus less effective for volumes with a high amount of noise. The use of a higher-order smoothed gradient estimation mask, such as a Sobel filter, helps in the presence of noise, but has the trade-off of potentially blurring the appearance of fine features of interest. Our method is also less effective in near-homogeneous regions where gradient directions are not well defined. In these near-homogeneous regions, however, the scalar values read along the estimated gradients tend to still fall within the same region, yielding scalar values similar to those which would be read if the gradients were well defined.

Much like other higher-dimensional transfer function methods, our technique can add high-frequency color variations that result in integration artifacts during rendering. Rather than color varying as a function of one variable (scalar value), the composited color varies as a result of the two additional scalar values read along the gradient direction. In practice we found that with some oversampling and the use of smoothly varying lighting transfer functions, artifacts from integration errors were not objectionable.

Our method shares a number of similarities with the use of scalar value/gradient magnitude pairs for transfer function specification. If one considers that gradient magnitude provides a measurement of how a scalar field varies at a point, then adding and subtracting this derivative from a scalar value sample provides an estimate of the values that would be acquired by sampling at unit spacing along the gradient direction. The accuracy of this estimation depends on the degree to which the scalar field varies linearly at that point, with material boundaries often having scalar fields that behave the least linearly. Our method, on the other hand, is able to take into account the actual distribution of scalar values along a material boundary directly and is thus better suited to deal with non-linear variations in scalar values at material transitions. The trade-off of using this information is the additional computation of performing these added scalar value reads.

7 FUTURE WORK AND CONCLUSION

In our current implementation we only use the scalar value pair read along the gradient direction for 2D lighting transfer functions. The center value could also be used as input for a three-dimensional lighting transfer function. Using this third variable could add significant complexity to the user interface and require additional texture storage for a 3D transfer function. We would like to investigate the use of this additional variable while avoiding these potential limitations.

The focus of this work is to use lighting to illustrate surfaces rather than opacity. The modulation of opacity can also provide a helpful means of illustrating semitransparent shapes. A future area of study is how the multi-sample transfer function method presented here could be used not only for lighting, but also for opacity. The right of Figure 11 shows a preliminary result we obtained by applying our method for material classification to assign color and opacity. In this example, the branches have a higher density than the leaves, while the leaves have a higher density than air. Because of inherent blur in the acquisition of the data set, the regions surrounding the branches have the same density as the leaves. With the use of a conventional 1D transfer function, shown on the left, the regions surrounding the branches are misclassified as leaves, resulting in a moss-like appearance. By using a sample's scalar value,
and the two scalars read along the gradient direction, as the domain of a 3D transfer function, a transfer function can be specified that takes into account the values on each side of a material boundary to properly segment the leaves from the regions surrounding the branches.

Figure 11: The regions surrounding the branches have the same intensity as the leaves and are incorrectly classified using a conventional transfer function, resulting in a layer of green surrounding the branches, as shown on the left. The right image shows the result of applying our multi-sample transfer function method for the specification of color and opacity. Notice that the voxels at transitions between wood and air are now classified as air or wood, and are no longer green.

A major strength of the user interface presented in this paper is its simplicity. The user can see the different material boundaries that occur in the volume by looking at the line-based histogram, and can add illumination to a boundary by selecting scalar values that occur on each side of that boundary. The simplicity of this interface also yields some limitations. With the interface we describe, each material boundary is selected using a 2D transfer function that is separable. Although an arbitrary transfer function can be constructed by combining multiple sets of widgets, it would be desirable to have a more direct means of providing this flexibility. In particular, one might want to enhance lighting based on the magnitude of the difference between the two scalar samples, which would not be possible using a separable transfer function. Providing a high degree of control over the construction of the transfer function, while maintaining the level of intuitiveness available with our current user interface, remains an important but difficult problem deserving future study.

Traditional transfer function methods use opacity to enhance material boundaries of interest. This paper describes a method that instead gives a high degree of control over illumination to illustrate surfaces, allowing opacity to be reserved for indicating thickness and occlusion. A novel user interface for visualizing and specifying these lighting transfer functions has also been presented. It is our belief that the technique we have developed can be used for the creation of more visually expressive imagery, to better meet the needs of scientists.

8 ACKNOWLEDGMENTS

This work has been sponsored in part by the U.S. National Science Foundation under contracts ACI 9983641 (PECASE award), ACI 0325934 (ITR), and ACI 0222991; the U.S. Department of Energy under Memorandum Agreements No. DE-FC02-01ER41202 (SciDAC program) and No. B523578 (ASCI VIEWS); the LANL/UC CARE program; the National Institute of Health through the Human Brain Project; and a United States Department of Education Government Assistance in Areas of National Need (DOE-GAANN) grant P200A980307. We would like to thank the VIZLAB of CAIP at Rutgers University, SFB 382 of the German Research Council, General Electric, General Electric Aircraft Engines, Evendale, Ohio, and Stefan Roettger, VIS, University of Stuttgart for providing data sets. The authors would also like to thank ATI Technologies and NVIDIA Corporation.

REFERENCES

[1] James Blinn. Light reflection functions for simulation of clouds and dusty surfaces. In SIGGRAPH '82 Conference Proceedings, pages 21–29, July 1982.
[2] Robert Drebin, Loren Carpenter, and Pat Hanrahan. Volume rendering. In SIGGRAPH '88 Conference Proceedings, pages 65–74, August 1988.
[3] Helwig Hauser, Lukas Mroz, Gian-Italo Bischi, and M. Eduard Gröller. Two-level volume rendering – fusing MIP and DVR. In IEEE Visualization 2000 Conference Proceedings, pages 211–218, 2000.
[4] Henrik Wann Jensen and Per H. Christensen. Efficient simulation of light transport in scenes with participating media using photon maps. In SIGGRAPH '98 Conference Proceedings, pages 311–320, July 1998.
[5] James T. Kajiya and Brian P. Von Herzen. Ray tracing volume densities. In SIGGRAPH '84 Conference Proceedings, pages 165–174, July 1984.
[6] Gordon Kindlmann. Transfer functions in direct volume rendering: Design, interface, interaction. In SIGGRAPH 2002 Course Notes, 2002.
[7] Gordon Kindlmann and James W. Durkin. Semi-automatic generation of transfer functions for direct volume rendering. In IEEE Symposium on Volume Visualization, pages 79–86, 1998.
[8] Gordon Kindlmann and David Weinstein. Hue-balls and lit-tensors for direct volume rendering of diffusion tensor fields. In Proceedings of IEEE Visualization '99 Conference, pages 183–189, October 1999.
[9] Joe Kniss, Gordon Kindlmann, and Charles Hansen. Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets. In Proceedings of IEEE Visualization 2001 Conference, pages 255–262, October 2001.
[10] Joe Kniss, Simon Premoze, Charles Hansen, and David Ebert. A model for volume lighting and modeling. IEEE Transactions on Visualization and Computer Graphics, 9(2):150–162, April–June 2003.
[11] Wolfgang Krueger. The application of transport theory to visualization of 3-D scalar data fields. Computers in Physics, pages 397–406, July–August 1991.
[12] Marc Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29–37, May 1988.
[13] Nelson Max. Light diffusion through clouds and haze. Computer Vision, Graphics, and Image Processing, 33(3):280–292, 1986.
[14] Nelson Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99–108, 1995.
[15] Tomoyuki Nishita, Eihachiro Nakamae, and Yoshinori Dobashi. Display of clouds and snow taking into account multiple anisotropic scattering and sky light. In SIGGRAPH '96 Conference Proceedings, pages 379–386, August 1996.
[16] Herke Jan Noordmans, Hans T.M. van der Voort, and Arnold W.M. Smeulders. Spectral volume rendering. IEEE Transactions on Visualization and Computer Graphics, 6(3):196–207, 2000.
[17] David Rodgman and Min Chen. Refraction in discrete ray tracing. In Proceedings of Volume Graphics 2001, pages 3–17, 2001.
[18] Holly E. Rushmeier and Kenneth E. Torrance. The zonal method for calculating light intensities in the presence of a participating medium. In SIGGRAPH '87 Conference Proceedings, pages 293–302, July 1987.
[19] Allen Van Gelder and Kwansik Kim. Direct volume rendering with shading via three-dimensional textures. In ACM Symposium on Volume Visualization '96 Conference Proceedings, pages 23–30, 1996.