VIS Modules 06 Direct Volume Rendering
The idea of Direct Volume Rendering (DVR) is to get a 3D representation of the volume
data directly.
The data is considered to represent a semi-transparent, light-emitting medium, so gaseous phenomena can also be simulated. The approaches in DVR are based on the laws of physics (emission, absorption, scattering). The volume data is used as a whole: we can look inside the volume and see all interior structures.
Backward methods
The image shows the basic idea of backward methods. A ray from the view point goes through a rectangle which represents the resulting image into the scene.
Forward methods use object-space/object-order algorithms. These algorithms proceed voxel by voxel, projecting the cells onto the image. Examples of this technique are slicing, shear-warp, and splatting.
Forward methods
The picture shows the basic idea of forward methods. A voxel influences several pixels in the resulting image.
1 Ray Casting
Ray Casting is similar to ray tracing in surface-based computer graphics. In volume
rendering we only deal with primary rays, i.e. no secondary rays for shadows, reflection or
refraction are considered. Ray Casting is a natural image order technique.
Since we have no surfaces in DVR, we have to step carefully through the volume. A ray is cast into the volume, sampling the volume at certain intervals. The sampling intervals are usually equidistant, but they don't have to be (e.g. importance sampling). At each sampling location, a sample is interpolated/reconstructed from the voxel grid. Popular filters are nearest neighbor (box), trilinear, or more sophisticated ones (Gaussian, cubic spline).
Volumetric ray integration:
The rays are cast through the volume. Color and opacity are accumulated along each ray (compositing).
Ray Casting
The image shows the procedure of ray casting. A ray is traced through the volume data; at each step the color and opacity values are updated. For interpolating the color value at a certain point, an interpolation kernel is used.
Front-to-back-strategy
The illustration shows the front-to-back-strategy. The color value is computed from the old color/opacity and the values of the sample.
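The front-to-back strategy can be sketched as a small compositing step; the function below assumes non-premultiplied sample colors, and all names are illustrative:

```c
/* One front-to-back compositing step (over operator).
   C_dst, A_dst: color and opacity accumulated along the ray so far;
   c_src, a_src: color and opacity of the current sample.
   The new color is the old color plus the sample's contribution,
   attenuated by the opacity already accumulated in front of it. */
void composite_front_to_back(float *C_dst, float *A_dst,
                             float c_src, float a_src)
{
    *C_dst = *C_dst + (1.0f - *A_dst) * c_src * a_src;
    *A_dst = *A_dst + (1.0f - *A_dst) * a_src;
}
```

A practical benefit of this order: accumulation can stop as soon as the accumulated opacity approaches 1 (early ray termination).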
Original Space-Leaping
In the original space-leaping approach the empty space of the volume is efficiently
traversed or even completely skipped. There are several methods for realizing this idea.
Hierarchical spatial data structure: The volume is subdivided into uniform regions which
are represented as nodes in a hierarchical data structure (e.g. octree). The empty space
can be skipped by traversing the data structure. This technique can be optimized by
storing the uniform information of the octree in the empty space of the 3D volume grid.
This data structure is called flat pyramid.
Flat pyramid
The image shows an example of a flat pyramid. In fact it is a quadtree, recursively subdivided down to pixel size.
The gray squares are the voxels which contain volume data. The white squares denote
empty voxels. In these voxels the level of the octree they belong to is stored. When a ray
encounters a voxel with uniformity information it jumps to the first voxel outside this
uniform region. Voxels with data are treated as usual. The voxels visited by a ray are
marked with black dots in the illustration. In this example the rays terminate after two data
voxels because a certain opacity level is reached.
Bounding boxes around objects: For volumes which contain only a few objects, bounding boxes can be used for space-leaping. The PARC (polygon-assisted ray casting) method builds a convex polyhedron around the object.
Adaptive ray traversal: The idea of this approach is to assign a so-called "vicinity flag" to all empty voxels neighboring a data voxel. The empty space is traversed rapidly by a low-precision algorithm, and when a vicinity voxel is encountered, it switches to a more accurate algorithm.
Adaptive ray traversal
The illustration shows an example for adaptive ray traversal
The light gray squares denote voxels with vicinity flag turned on. The fast traversal
algorithm is shown with the dashed gray lines, the precise algorithm with the solid black
lines.
Proximity clouds:
This method is an extension of the approach described above. In a pre-processing step
the distance to the nearest data voxel is computed for each empty voxel.
Proximity clouds
The picture shows an example for proximity clouds.
In the illustration above voxels with the same distance value are shown by different
shades of blue. The sample points are represented by the black dots. When a ray
encounters a proximity voxel with value n it can skip the next n voxels because there are
no data voxels in between.
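A sketch of how a ray might advance through such a distance field; it assumes the direction is normalized so that its largest component is 1 (one voxel per step) and the distances are stored in the chessboard metric, so a value d allows a safe leap of d steps:

```c
/* Advancing a ray through a precomputed distance field (proximity clouds).
   dist[i] holds, for each voxel, the chessboard distance to the nearest
   data voxel; 0 marks a data voxel itself. pos is the current position
   and dir the step vector, both in voxel units; names are illustrative.
   Returns 1 if a data voxel was reached, 0 if the ray left the volume. */
int next_data_voxel(const unsigned char *dist, int nx, int ny, int nz,
                    float pos[3], const float dir[3])
{
    for (;;) {
        int x = (int)pos[0], y = (int)pos[1], z = (int)pos[2];
        if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz)
            return 0;                        /* ray left the volume */
        unsigned char d = dist[x + nx*(y + ny*z)];
        if (d == 0)
            return 1;                        /* reached a data voxel */
        /* safe to leap d steps: no data voxel can be closer */
        pos[0] += d * dir[0];
        pos[1] += d * dir[1];
        pos[2] += d * dir[2];
    }
}
```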
Homogeneity-acceleration: Homogeneous regions are approximated with fewer sample points. For this purpose, mean value and variance are stored in the octree.
2.3 Exploiting Temporal Coherence in Volume
Animations
For each pixel, the coordinates of the first non-transparent voxel hit by the ray are stored in the so-called C-buffer (coordinate buffer). The coordinates are computed after rendering the first frame and then updated continuously. This data can be used to determine where to start tracing the ray. This is illustrated in the image below.
The C-buffer has to be updated correctly when the world space is rotated. Therefore the
C-buffer coordinates are transformed by the rotation, i.e. the voxel coordinates [x,y,z]
stored in the C-buffer[i,j] are mapped to C-buffer[i',j']. Now two problems may occur. First it
is possible that a C-buffer entry remains empty. In this case the ray starts at the volume
boundary as in the normal ray casting approach. The second problem is more
complicated. It is possible that several coordinates are mapped to the same C-buffer
entry. An example:
In the example above the coordinates of the voxel [x0, y0, z0] are stored in C-buffer[i, j] and those of voxel [x1, y1, z1] in C-buffer[i, k] with k > j. Now the world space is rotated around the X-axis. Both C-buffer entries remain in column i of the buffer, but the row order of the two projected voxels has changed, i.e. j' > k'. This is the indicator for voxels that are potentially hidden after a rotation. These entries possibly have to be removed.
This image illustrates a stack of 2D textured slices and the proxy geometry that could be
achieved with it.
This picture shows a cube drawn only with outlines. Inside the cube there are several slices of a human head blended over each other from back to front.
The texture-mapped 2D slices are object-aligned because they are computed in object space parallel to the x-, y-, and z-axes, regardless of the angle from which the viewer looks at the object. To visualize the object correctly from every possible viewing angle, three stacks of 2D texture slices have to be computed. One can then switch between the different stacks when the viewing angle comes close to 45 degrees to the slice normal. Otherwise the viewer would look onto the slice edges and thus see through the volume. When the voxels of the slice plane are projected onto pixels of the viewing plane, a pixel can lie in between voxels, so its value is found by bilinear interpolation.
Example for a teddy bear once rendered with axis aligned slices and once rendered with view aligned slices.
Problems:
• Euclidean distance between slices along a light ray depends on viewing parameters
• sampling distance depends on viewing direction
• apparent brightness changes if opacity is not corrected
• Artifacts when stack is viewed close to 45 degrees
Render components
• Data setup
• Volume data representation
• Transfer function representation
• Data download
• Volume textures
• Transfer function tables
• Per-volume setup (once per frame)
• Blending mode configuration
• Texture unit configuration (3D)
• (Fragment shader configuration)
• Per-slice setup
• Texture unit configuration (2D)
• Generate fragments
• Proxy geometry rendering
• RGBA
• Pre-classified (pre-shaded) colored volume rendering
• Transfer function is already applied to scalar data
• Paletted texture
• Index into color and opacity table (= palette)
• Index size = byte
• Index is identical to scalar value
• Pre-classified (pre-shaded) colored volume rendering
• Transfer function applied to scalar data during runtime
• Simple and fast change of transfer functions
• OpenGL code for paletted 3D texture
glTexImage3D (GL_TEXTURE_3D, 0, GL_COLOR_INDEX8_EXT,
              size_x, size_y, size_z, 0,
              GL_COLOR_INDEX, GL_UNSIGNED_BYTE, voldata);
Compositing:
• Works on fragments
• Per-fragment operations
• After rasterization
• Blending of fragments via over operator
• OpenGL code for over operator
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Generate fragments:
• Render proxy geometry
• Slice
• Simple implementation: quadrilateral
• More sophisticated: triangulated intersection surface between slice plane and boundary
of the volume data set
4 Shear-Warp Factorization
As described in the last chapter, algorithms that use spatial data structures can be divided
into two categories according to the order in which the data structures are traversed:
image-order or object-order.
Image-order algorithms operate by casting rays from each image pixel and processing the
voxels along each ray (e.g. Ray Casting). This processing order has the disadvantage that
the spatial data structure must be traversed once for every ray, resulting in redundant
computation (e.g. Multiple descents of an octree). In contrast, object-order algorithms
operate by splatting voxels onto the image while streaming through the volume data in
storage order. We will deal with this technique more deeply in this chapter. However, this
processing order makes it difficult to implement early ray termination, an effective
optimization in ray-casting algorithms.
The algorithm is software-based and works on the same slice-based technique as texture
based volume rendering. Therefore it also needs 3 stacks of the volume along 3 principal
axes. They are computed in object-space. The general goal is to make viewing rays
parallel to each other and perpendicular to the image, which is achieved by a simple
shear.
Shear along the volume slices, composite the resampled slices to an intermediate image
and warp it to image space.
In this illustration, several parallel rays are cast through the volume slices at an arbitrary angle. In the second step, all rays are sheared relative to the first slice so that they are perpendicular to it.
After shearing, the slices have to be resampled. The rays then hit voxel center after voxel center, and no further interpolation is needed.
Most of the work in volume rendering has focused on parallel projections. However,
perspective projections provide additional cues for resolving depth ambiguities.
Perspective projections present a problem because the viewing rays diverge so it is
difficult to sample the volume uniformly.
For a perspective projection each image slice has to be scaled after shearing and
warping.
The Algorithm in a nutshell:
• Shear (3D)
• Project (3D → 2D)
• Warp (2D)
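For a parallel projection, the shear that makes the viewing rays perpendicular to the slices can be sketched as follows; it assumes the principal viewing axis is z (the view direction's largest component) and all names are assumptions of this sketch:

```c
/* Per-slice shear for a parallel projection, assuming the principal
   viewing axis is z, i.e. |v[2]| is the largest component of the view
   direction v. Slice k is translated by k * (sx, sy) before it is
   composited into the intermediate image; the remaining 2D distortion
   is removed by the final warp. */
void shear_factors(const float v[3], float *sx, float *sy)
{
    *sx = -v[0] / v[2];
    *sy = -v[1] / v[2];
}
```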
The Projection in sheared object space has several geometric properties that simplify the
compositing step of the algorithm:
1. Scan lines of pixels in the intermediate image are parallel to scan lines of voxels in the
volume data.
2. All voxels in a given voxel slice are scaled by the same factor. In different slices by
different factors.
3. Parallel projections only: Every voxel slice has the same scale factor. The scale factor for parallel projections can be chosen arbitrarily, but we choose a unity scale factor so that for a given voxel scan line there is a one-to-one mapping between voxels and intermediate image pixels.
Offsets stored with opaque pixels in the intermediate image allow occluded voxels to be
skipped efficiently.
This picture shows a row of 8 pixels. Pixels 1, 4 and 8 are non-opaque (white), the others are opaque (black). Each opaque pixel stores a pointer to the next non-opaque pixel.
By combining both ideas, optimization of the volume data and of the intermediate image, we can exploit coherence and make the algorithm very fast. The first property above states that scan lines of pixels and voxels are parallel: voxel scan lines in the sheared volume are aligned with pixel scan lines in the intermediate image, and both can be traversed in scan-line order simultaneously. By using the run-length encoding of the voxel data to skip voxels which are transparent and the run-length encoding of the image to skip pixels which are occluded, we perform work only for voxels which are both non-transparent and visible.
Resampling and compositing are performed by streaming through both the voxels and the
intermediate image in scanline order, skipping over voxels which are transparent and
pixels which are opaque.
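The simultaneous traversal described above can be sketched in simplified form; the run and skip-list encodings below are illustrative stand-ins for the actual run-length data structures:

```c
/* Simplified sketch of shear-warp scan-line traversal: voxel runs are
   given as (length, transparent?) pairs, and each pixel carries an
   offset to the next non-opaque pixel at or after it. Work is done only
   where the voxel is non-transparent AND the pixel is non-opaque.
   Returns how many samples were actually processed. */
typedef struct { int len; int transparent; } Run;

int traverse_scanline(const Run *runs, int nruns,
                      const int *skip, /* skip[i] = next non-opaque pixel >= i */
                      int npix)
{
    int work = 0, pix = 0;
    for (int r = 0; r < nruns; ++r) {
        if (runs[r].transparent) { pix += runs[r].len; continue; }
        int end = pix + runs[r].len;
        while (pix < end && pix < npix) {
            int p = skip[pix];               /* leap over occluded pixels */
            if (p >= end || p >= npix) { pix = end; break; }
            pix = p;
            ++work;                          /* resample & composite here */
            ++pix;
        }
    }
    return work;
}
```

This relies on property 3 above: the one-to-one mapping between voxels and intermediate-image pixels lets a single index walk both encodings.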
Coherence in voxel space:
• Each slice of the volume is only translated
• Fixed weights for bilinear interpolation within voxel slices
• Computation of weights only once per frame
Final warping:
• Works on composited intermediate image
• Warp: affine image warper with bilinear filter
• Often done in hardware: render a quadrilateral with intermediate 2D image being
attached as 2D texture
Parallel projection:
• Efficient reconstruction
• Lookup table for shading
• Lookup table for opacity correction (thickness)
• Three RLE of the actual volume (in x, y, z)
Perspective projection:
• Similar to parallel projection
• Difference: viewing rays diverge, so voxels need to be scaled
• Hence more than two voxel scan lines are needed for one image scan line
• The many-to-one mapping from voxels to pixels makes it slower than parallel projection.
5 Splatting
Splatting
Splatting is a volume rendering method that distributes volume data values across a
region on the rendered image in terms of a distribution function.
Splatting is an object-order method described by [Westover, 1990]. It accumulates data points by "throwing" a kernel for each voxel onto the drawing plane; the resulting footprints on the drawing plane make up the visualization. The kernel shape and size are critical for the quality of the result. The accumulation, or compositing, can be done back-to-front, which guarantees correct visibility. Front-to-back compositing is faster, because the process can be stopped when pixels are fully opaque. The original algorithm is fast but poor in quality; since the first publication, however, many improvements have been implemented. Note that good splatting results need about as much CPU time as ray casting.
The basic idea of splatting is to project each sample (voxel) from the volume onto the image plane: each voxel is thrown onto a plane and leaves its footprint on it. For each thrown voxel we have to compute which part of the image plane is affected.
You can think of the volume (cloud) as follows: each voxel is a measurement of a continuous scalar function. Ideally we want to reconstruct the continuous volume, not the discrete values. That's why we use an interpolation kernel w, which should be spherically symmetric, because we don't want different results when looking from different directions. A 3D Gaussian convolution kernel is a good choice.
The reconstruction can be done through:
• Draw each voxel as a cloud of points (footprint) that spreads the voxel contribution
across multiple pixels
• Footprint: splatted (integrated) kernel
• Approximate the extent of the 3D kernel h(x,y,z) by a sphere; its projection yields a 2D convolution kernel
If the 3D spherical kernel is projected onto a 2D plane by a perspective projection, it leaves an elliptical footprint.
The choice of kernel can affect the quality of the image as you can see in the picture
below. Examples are cone, Gaussian, sinc, and bilinear function.
Voxel kernels are added within each 2D sheet of the volume. The weighted footprints of
the sheets are composited in front-to-back order. For projection of all sheets in the sheet-
buffer we use volume slices most parallel to the image plane (analogously to stack of
slices).
Which stack of volume slices is used for projection onto the image plane depends on the viewing angle.
If the viewing angle is about 30 degrees we choose the stack aligned with the x-axis; if the angle is about 70 degrees we choose the stack aligned with the y-axis.
Core algorithm for splatting:
Volume:
• Represented by voxels
• Slicing
Image plane:
• Sheet buffer
• Compositing buffer
Add voxel kernels within first sheet to the sheet-buffer.
Disadvantages:
• Blurry images for zoomed views, because Gaussian cuts away high frequencies.
• High fill-rate for zoomed splats.
• Reconstruction and integration to be performed on a per-splat basis.
• Dilemma when splats overlap
Assumptions:
• Based on a physical model for radiation
• Geometrical optics
Neglect:
• Diffraction
• Interference
• Wave-character
• Polarization
Based on energy conservation on microscopic level (not on photon and electron level).
• The basic quantity of light is the radiance I, sometimes also called specific intensity.
Photons reflected on a surface.
The figure shows a surface element with area dA, a reflection vector n at angle θ to the surface normal, and the solid angle dΩ.
The equation of radiative energy tells us, for a surface element and radiation from a specific direction n within a solid angle dΩ, the amount of energy moving through this area:

dE = I(x, n, ν) cos θ dA dΩ dν dt

with ν the frequency, θ the angle between n and the surface normal, and dt the time interval.
There are four effects responsible for a difference between incoming and outgoing radiance at a single position:
• Absorption: specific intensity comes in, less intensity goes out.
• Emission: less energy comes in, but more energy goes out.
• Outscattering: energy is scattered out in all directions (and can scatter in at another position).
• Inscattering: energy scatters in from other parts.
Absorption:

dE_ab = χ(x, n, ν) I(x, n, ν) ds dA dΩ dν dt with

χ(x, n, ν) = absorption coefficient (a scalar value or function)

Notice that no cosine term appears in this expression. This is because the absorbing volume does not depend on the incident angle. The absorption coefficient generally is a function of x, n and ν and is measured in units of m⁻¹. The term 1/χ(x, n, ν) is also known as the mean free path of photons of frequency ν in the material.
Emission:

dE_em = η(x, n, ν) ds dA dΩ dν dt

In an analogous way as we did for absorption, we break the total emission coefficient η into two parts:
• Thermal part or source term q(x, n, ν)
• Scattering part j(x, n, ν)
Scattering:
In order to take into account the angle dependence of scattering, we introduce a phase function p(x, n, n', ν, ν'). The amount of radiant energy scattered from frequency ν to frequency ν' and from direction n to direction n' is given by:

dE_scat = j(x, n, ν) ds dA dΩ dν dt with

j(x, n, ν) = (1/4π) ∬ σ(x, n', ν') p(x, n, n', ν, ν') I(x, n', ν') dΩ' dν'

where I(x, n', ν') is the inscattered radiance and σ the scattering coefficient. It should be mentioned that there is no deeper meaning behind the factor 1/4π. It simply cancels the factor 4π resulting from integrating a unity phase function over the sphere.
Furthermore, in a scattering process a photon interacts with a scattering center and emerges from the event moving in a different direction, in general with a different frequency, too. If the frequency doesn't change, one speaks of elastic scattering.
Scattering can occur when photons hit a surface (rendering) or when they enter a participating medium (volume rendering).
This picture shows an example of surface scattering: incoming energy from one specific direction scatters at the surface and gets distributed over several directions. The next example shows volume scattering inside a medium: there the incoming energy gets scattered several times inside the medium, each time changing direction. This means the energy can enter the medium at any point and can leave it at any other point, depending on how often it scatters.
Balancing the incoming and outgoing energy along a ray segment of length ds gives:

{I(x, n, ν) − I(x+dx, n, ν)} dA dΩ dν dt (the difference can be any of the four effects)
= {−χ(x, n, ν) I(x, n, ν) + η(x, n, ν)} ds dA dΩ dν dt

If we write dx = n ds, the time-independent equation of transfer follows. Note that the emission coefficient in general contains a scattering part and thus an integral over I:

n⋅∇I = −χI + η

This means that the radiance along a ray in direction n changes due to absorption plus emission. In vacuum, for example, χ = 0 and η = 0, so we have no absorption and no emission, which means the radiance does not change.
Written out with true absorption κ, scattering coefficient σ and true emission q, the equation of transfer reads:

n⋅∇I = −(κ + σ) I + q + (1/4π) ∬ σ(x, n', ν') p(x, n', n, ν', ν) I(x, n', ν') dΩ' dν' with

κ = true absorption
σ = scattering
q = true emission
∬ = integration over all incoming scattered photons

For elastic scattering, where the frequency does not change, the integral over frequencies drops out:

n⋅∇I = −(κ + σ) I + q + (1/4π) ∫ σ(x, n') p(x, n', n) I(x, n') dΩ'
At surfaces, the boundary condition

I(x, n, ν) = E(x, n, ν) + ∬ k(x, n', n, ν', ν) I_inc(x, n', ν') dΩ' dν' with

E(x, n, ν) = emission
k(x, n', n, ν', ν) = surface scattering kernel
I_inc(x, n', ν') = incoming radiation

holds. Writing the transfer equation with the directional derivative along the ray yields

∂I(x, n, ν)/∂s = −χ(x, n, ν) I(x, n, ν) + η(x, n, ν)

Notice that the operator n⋅∇ is the directional derivative along a line, x = p + s n, with p being some arbitrary reference point.
The optical depth between two points x1 = p + s1 n and x2 = p + s2 n is defined as

τ(x1, x2) = ∫_{s1}^{s2} χ(x(s'), n, ν) ds'

Recall that 1/χ is the mean free path of a photon with frequency ν. Therefore we can interpret optical depth as the number of mean free paths between two points x1 and x2. Further, optical depth serves as an integrating factor e^τ(x0, x) for the equation of transfer:

∂/∂s ( I(x, n, ν) ⋅ e^τ(x0, x) ) = η(x, n, ν) ⋅ e^τ(x0, x)
Making use of the fact that optical depth can be decomposed into τ(x0, x) = τ(x0, x') + τ(x', x), the final integral form of the equation of transfer can be written as follows:

I(x, n, ν) = I(x0, n, ν) e^(−τ(x0, x)) + ∫_{s0}^{s} η(x', n, ν) e^(−τ(x', x)) ds'

The first term is the radiation entering the volume at the boundary point x0, attenuated on its way to x. The integral accounts for emission at any point x' of the volume, which is then absorbed on its further way.
This equation can be viewed as the general formal solution of the time-independent
equation of transfer. It states that the intensity of radiation traveling along n at point
x is the sum of photons emitted from all points along the line segment x ' = ps ' n ,
attenuated by the integrated absorptivity of the intervening material, plus an attenuated
contribution from photons entering the boundary surface when it is pierced by that line
segment. Generally the emission coefficient will contain I itself.
Vacuum condition is an important special case of the transport equation, because there is
no absorption, emission or scattering at all, except on surfaces. Further the frequency
dependence can be ignored and the equation of transfer (inside the volume) is greatly
simplified:
I(x, n) = I(x0, n)
Rays incident on a surface at x are traced back to some other surface element at x'. Rewriting the element of solid angle dΩ' into a more familiar form

dΩ' = (cos θ' / |x − x'|²) dA'

leads to the standard form via the BRDF (bidirectional reflectance distribution function):

k(x, n', n) = f_r(x, n', n) cos θ_i
If we ignore scattering in the volume rendering equation, we are left with the so-called emission-absorption model or density-emitter model. One can imagine the volume to be filled with small light-emitting particles. Rather than being modeled individually, these particles are described by some density function.
This leads to some simplifications:
• No scattering
• Emission coefficient consists of the source term only: η = q (true emission)
• Absorption coefficient consists of true absorption only: χ = κ
• No mixing between frequencies (no inelastic effects), so the frequency dependence can be dropped
This equation is much simpler because we are only interested in the radiance along the ray without any scattering. In order to solve this equation one has to discretize along the ray. There is no need to use equidistant steps, though this is the most commonly used procedure. If we divide the range of integration into n intervals, then the specific intensity at position s_k is related to the specific intensity at position s_{k−1} by the identity. So the discretization of the volume rendering equation can be written:

I(s_k) = I(s_{k−1}) e^(−τ(s_{k−1}, s_k)) + ∫_{s_{k−1}}^{s_k} q(s) e^(−τ(s, s_k)) ds with

e^(−τ(s_{k−1}, s_k)) = attenuation between s_{k−1} and s_k
q(s) = emission term, with q the color which is emitted
For abbreviation we define two parts:

• Transparency: θ_k = e^(−τ(s_{k−1}, s_k))
• Emission part: b_k = ∫_{s_{k−1}}^{s_k} q(s) e^(−τ(s, s_k)) ds

Unrolling the recursion I(s_k) = I(s_{k−1}) θ_k + b_k over all n intervals gives

I(s_n) = I(s_{n−1}) θ_n + b_n = ∑_{k=0}^{n} b_k ∏_{j=k+1}^{n} θ_j = I(s_{n−1}) (1 − α_n) + b_n

(with b_0 = I(s_0), the intensity entering the volume). This formula describes the entire physics of radiation transport: the sum of everything that is emitted, each contribution weighted by the product of the transparencies in front of it (this is the over operator from ray casting). More exactly, the quantity θ_k is called the transparency of the material between s_{k−1} and s_k. Transparency is a number between 0 and 1. An infinite optical depth τ(s_{k−1}, s_k) corresponds to a vanishing transparency θ_k = 0. On the other hand, a vanishing optical depth corresponds to full transparency θ_k = 1. Alternatively one often uses opacity, defined as α_k = 1 − θ_k.
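The discretized equation can be evaluated directly by iterating along the ray; with transparency θ_k = 1 − α_k this loop is exactly the over operator (names are illustrative):

```c
/* Evaluation of the discretized transfer equation
   I(s_k) = I(s_{k-1}) * theta_k + b_k along one ray.
   b[k] and alpha[k] hold the emission and opacity of interval k,
   I0 is the intensity entering the volume at the boundary. */
float integrate_ray(float I0, const float *b, const float *alpha, int n)
{
    float I = I0;
    for (int k = 0; k < n; ++k)
        I = I * (1.0f - alpha[k]) + b[k];   /* over operator */
    return I;
}
```

For a fully opaque interval (α_k = 1) everything behind it is extinguished, which is exactly the behavior the transparency discussion above describes.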
7 Compositing Schemes
As a matter of course there exist variations of compositing schemes other than the over operator:
• First
• Average
• Maximum intensity projection
• Accumulate
Average or linear compositing produces basically an X-ray picture and takes no opacity
into account.
Maximum Intensity Projection (MIP) uses the greatest value along the ray for rendering. It is often used for magnetic resonance angiograms or to extract vessel structures from a CT or MRT scan.
Accumulate compositing is similar to the over operator and the emission-absorption model
where we have early ray termination. Intensity colors are accumulated until enough
opacity is collected. This makes transparent layers visible (see volume classification).
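The schemes other than over can be sketched as simple per-ray reductions; the threshold parameter of first-hit is an assumption of this sketch:

```c
/* Three compositing schemes applied to the samples of one ray.
   s[] holds the interpolated sample intensities, n their count. */

/* First: return the first sample above an iso-threshold. */
float composite_first(const float *s, int n, float threshold)
{
    for (int i = 0; i < n; ++i)
        if (s[i] >= threshold) return s[i];
    return 0.0f;
}

/* Average: X-ray-like picture, no opacity taken into account. */
float composite_average(const float *s, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) sum += s[i];
    return n ? sum / n : 0.0f;
}

/* Maximum intensity projection: greatest value along the ray. */
float composite_mip(const float *s, int n)
{
    float m = 0.0f;
    for (int i = 0; i < n; ++i) if (s[i] > m) m = s[i];
    return m;
}
```

Accumulate compositing corresponds to the over-operator loop shown for the emission-absorption model, optionally with early ray termination.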