VIS Modules 06 Direct Volume Rendering

Direct Volume Rendering (DVR) provides a 3D representation of volume data, simulating semi-transparent light-emitting media, using either backward or forward methods. Techniques like Ray Casting and texture-based volume rendering allow for efficient rendering by utilizing algorithms that account for color, opacity, and spatial coherence. Acceleration techniques such as early ray termination and adaptive sampling improve performance by optimizing the traversal of volume data.


Direct Volume Rendering

The idea of Direct Volume Rendering (DVR) is to get a 3D representation of the volume
data directly.
The data is considered to represent a semi-transparent, light-emitting medium. Therefore, gaseous phenomena can also be simulated. The approaches in DVR are based on the laws of physics (emission, absorption, scattering). The volume data is used as a whole: we can look inside the volume and see all interior structures.

In DVR either backward or forward methods can be used.


Backward methods use image-space/image-order algorithms. They are performed pixel by pixel. An example is Ray Casting, which will be discussed in detail below.

Backward methods
The image shows the basic idea of backward methods. A ray from the viewpoint passes through a rectangle, which represents the resulting image, into the scene.

Forward methods use object-space/object-order algorithms. These algorithms are performed voxel by voxel, and the cells are projected onto the image. Examples of this technique are slicing, shear-warp, and splatting.

Forward methods
The picture shows the basic idea of forward methods. A voxel influences several pixels in the resulting image.

1 Ray Casting
Ray Casting is similar to ray tracing in surface-based computer graphics. In volume
rendering we only deal with primary rays, i.e. no secondary rays for shadows, reflection or
refraction are considered. Ray Casting is a natural image order technique.
Since we have no surfaces in DVR, we have to carefully step through the volume. A ray is cast into the volume, sampling the volume at certain intervals. The sampling intervals are usually equidistant, but they don't have to be (e.g. importance sampling). At each sampling location, a sample is interpolated/reconstructed from the voxel grid. Popular filters are nearest neighbor (box), trilinear, or more sophisticated ones (Gaussian, cubic spline).
Volumetric ray integration:
The rays are cast through the volume. Color and opacity are accumulated along each ray (compositing).

Ray Casting

The image shows the procedure of Ray Casting. A ray is traced through the volume data; at each step the color and opacity values are updated. For interpolating the color value at a certain point, an interpolation kernel is used.

How are color and opacity determined at each integration step?


• Opacity and (emissive) color in each cell according to classification
• Additional color due to external lighting: according to volumetric shading (e.g. Blinn-
Phong)
How can the compositing of semi-transparent voxels be done?
• Physical model: emissive gas with absorption
• Approximation: density-emitter-model (no scattering)
• The over operator was introduced by Porter and Duff [Porter-1984-CDI]. For back-to-front compositing:
$C_{out} = (1 - \alpha)\, C_{in} + C, \qquad C_i^{in} = C_{i-1}^{out}$
Over operator
The picture illustrates the back-to-front-strategy. The color value is computed from the old color value and the color/opacity of the sample.

A front-to-back strategy is also possible: $C_{out} = C_{in} + (1 - \alpha_{in})\, C$, $\alpha_{out} = \alpha_{in} + (1 - \alpha_{in})\, \alpha$


This approach requires maintaining the accumulated opacity $\alpha$ along the ray.

Front-to-back-strategy
The illustration shows the front-to-back-strategy. The color value is computed from the old color/opacity and the values of the sample.

There are several traversal strategies:


• Front-to-back (most often used in ray casting)
• Back-to-front (e.g. in texture-based volume rendering)
• Discrete (Bresenham) or continuous (DDA) traversal of cells
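
As a minimal illustration of the front-to-back strategy, the following C sketch composites one ray with the over operator. The helpers sample_color() and sample_alpha(), and the variables ray and num_steps, are hypothetical placeholders for the classification and traversal machinery described above, not a fixed API.

/* Sketch: front-to-back compositing along one ray (over operator). */
float C = 0.0f;                          /* accumulated (associated) color */
float A = 0.0f;                          /* accumulated opacity alpha_in   */
for (int i = 0; i < num_steps; i++) {
    float a = sample_alpha(ray, i);      /* opacity from classification         */
    float c = a * sample_color(ray, i);  /* associated (opacity-weighted) color */
    C = C + (1.0f - A) * c;              /* C_out = C_in + (1 - alpha_in) * C   */
    A = A + (1.0f - A) * a;              /* alpha_out = alpha_in + (1 - alpha_in) * alpha */
}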

2 Acceleration Techniques for Ray


Casting
The problem with ray casting is that it is very time-consuming. The basic idea for accelerating the rendering process is to neglect "irrelevant" information and to exploit coherence.

2.1 Early Ray Termination


Colors from far-away regions do not contribute if the accumulated opacity is too high. The idea of the early ray termination approach is to stop the traversal once the contribution of a sample becomes irrelevant. The user can set an opacity level for the termination of a ray.
The color value is computed by front-to-back-compositing.
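In the compositing sketch from section 1, early ray termination amounts to one extra test inside the loop; alpha_max is the user-defined opacity threshold (an assumed parameter):

    if (A >= alpha_max)   /* e.g. 0.95: further samples are invisible */
        break;            /* stop the traversal for this ray */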
2.2 Space-Leaping
Space-leaping means the fast traversal of regions with homogeneous structure. These regions can be walked through rapidly without losing relevant information. There are several techniques based on this principle.

Original Space-Leaping
In the original space-leaping approach the empty space of the volume is efficiently
traversed or even completely skipped. There are several methods for realizing this idea.
Hierarchical spatial data structure: The volume is subdivided into uniform regions which
are represented as nodes in a hierarchical data structure (e.g. octree). The empty space
can be skipped by traversing the data structure. This technique can be optimized by
storing the uniform information of the octree in the empty space of the 3D volume grid.
This data structure is called flat pyramid.

Flat pyramid
The image shows an example of a flat pyramid. In fact it is a quadtree, recursively subdivided down to pixel size.

The gray squares are the voxels which contain volume data. The white squares denote
empty voxels. In these voxels the level of the octree they belong to is stored. When a ray
encounters a voxel with uniformity information it jumps to the first voxel outside this
uniform region. Voxels with data are treated as usual. The voxels visited by a ray are
marked with black dots in the illustration. In this example the rays terminate after two data
voxels because a certain opacity level is reached.

Bounding boxes around objects: For volumes which contain only a few objects, bounding boxes can be used for space-leaping. The PARC (polygon-assisted ray casting) method builds a convex polyhedron around the object.

Adaptive ray traversal: The idea of this approach is to assign a so-called "vicinity flag" to all empty voxels neighboring a data voxel. The empty space is traversed rapidly by an algorithm with low precision; when a vicinity voxel is encountered, the method switches to a more accurate algorithm.
Adaptive ray traversal
The illustration shows an example for adaptive ray traversal

The light gray squares denote voxels with vicinity flag turned on. The fast traversal
algorithm is shown with the dashed gray lines, the precise algorithm with the solid black
lines.
Proximity clouds:
This method is an extension of the approach described above. In a pre-processing step
the distance to the nearest data voxel is computed for each empty voxel.

Proximity clouds
The picture shows an example for proximity clouds.

In the illustration above voxels with the same distance value are shown by different
shades of blue. The sample points are represented by the black dots. When a ray
encounters a proximity voxel with value n it can skip the next n voxels because there are
no data voxels in between.
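A hedged C sketch of this skipping logic follows; dist(), advance(), inside_volume(), and sample_and_composite() are assumed helpers over the pre-processed distance volume, not a fixed API.

/* Sketch: empty-space skipping with a proximity-cloud distance volume. */
while (inside_volume(pos)) {
    int d = dist(pos);                   /* distance to the nearest data voxel  */
    if (d > 0)
        pos = advance(pos, dir, d);      /* leap d voxels: no data in between   */
    else {
        sample_and_composite(pos);       /* data voxel: regular ray casting step */
        pos = advance(pos, dir, 1);
    }
}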
Homogeneity-acceleration: Homogeneous regions are approximated with fewer sample
points. Therefore the mean values and variance are stored in the octree.
2.3 Exploiting Temporal Coherence in Volume
Animations
For each pixel, the coordinates of the first non-transparent voxel hit by the ray are stored in the so-called C-buffer (coordinate buffer). The coordinates are computed after rendering the first frame and then updated continuously. This data can be used to determine where to start tracing the ray, as illustrated in the image below.

Using the C-buffer


The image shows an example using the C-buffer.

The C-buffer has to be updated correctly when the world space is rotated. Therefore the C-buffer coordinates are transformed by the rotation, i.e. the voxel coordinates [x,y,z] stored in C-buffer[i,j] are mapped to C-buffer[i',j']. Now two problems may occur. First, it is possible that a C-buffer entry remains empty. In this case the ray starts at the volume boundary as in the normal ray casting approach. The second problem is more complicated: it is possible that several coordinates are mapped to the same C-buffer entry. An example:

Rotation of the world space.


The picture shows that after a rotation of the world space points may be hidden.

In the example above the coordinates of the voxel $[x_0, y_0, z_0]$ are stored in C-buffer[i,j] and those of voxel $[x_1, y_1, z_1]$ in C-buffer[i,k] with $k > j$. Now the world space is rotated around the x-axis. Both C-buffer entries remain in column $i$ of the buffer, but the row order of the two projected voxels has changed, i.e. $j' > k'$. This is the indicator for voxels that are potentially hidden after a rotation. These entries possibly have to be removed.
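
The update can be sketched in C as follows; the buffer types and the helpers rotate(), project(), valid(), and invalidate() are hypothetical, and conflicting entries are simply removed, as described above.

/* Sketch: remap C-buffer entries after a rotation R. */
for (int j = 0; j < height; j++)
    for (int i = 0; i < width; i++) {
        if (!valid(cbuf[j][i])) continue;
        Vec3 v = rotate(R, cbuf[j][i]);     /* transform stored voxel coordinates */
        Pix  p = project(v);                /* new buffer position (i', j')       */
        if (valid(newbuf[p.j][p.i]))
            invalidate(&newbuf[p.j][p.i]);  /* collision: entry may be hidden     */
        else
            newbuf[p.j][p.i] = v;
    }
/* rays with invalid entries start at the volume boundary as usual */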

2.4 Template-Based Volume Viewing


The template-based volume viewing approach by Yagel [Yagel-1992-DRT] stores the traversal steps through the volume as a ray template to accelerate the rendering process. The template is generated by a 3D DDA algorithm and can be used for parallel rays (orthographic projection). The rays have to be traced from a base plane which is oriented parallel to the volume.

Template based volume viewing


The illustration shows an example for template-based volume viewing.

2.5 Adaptive screen sampling


In some applications, rendering is performed by adaptively switching from one rendering method or sampling rate to another. Adaptive screen sampling, introduced by Levoy [Levoy-1990-HRT], is well known in traditional ray tracing: the number of rays emitted from a given pixel is adapted to the color change in a subset of pixels in a small neighborhood. This means that in areas with high value gradients, additional rays are traced. One problem is that the rays occasionally miss the surface; in that case missing values are interpolated.

Adaptive screen sampling.


In this example we have an area of change from black to white, and therefore high value gradients. In this case adaptive screen sampling switches to a higher sampling rate to
replace interpolated values with exact color values resulting from the second sampling step.
3 Texture-Based Volume Rendering
Texture-based volume rendering is an approach to generate an image viewed with
different camera parameters using scene information extracted from a set of reference
images. The rendering speed of this approach depends only on image size instead of
scene complexity, and complex hand-made geometric models are not required. Furthermore, these techniques take advantage of functionality implemented in graphics hardware, like rasterization, texturing, and blending.
Texture-based volume rendering is an object-space technique, since all calculations are
done in object-space. This means that the rendering is accomplished by projecting each
element onto the viewing plane such as to approximate the visual stimulus of viewing the
element with regard to the chosen optical model. Two principal methods have been shown
to be very effective in performing this task: slicing and cell projection.
We will only discuss the former method: Each element gets projected on a slice-by-slice
basis. Therefore, the cross sections of each element and the current slicing plane are
computed.

Texture-based volume rendering.


At each intersection point with one of the element edges (red points) the color value is interpolated
from the pre-shaded samples at cell vertices. The cross sections are then rendered by letting the graphics hardware resample the color values onto the discrete pixel buffer during
rasterization.

Example of a scene description passing through the rendering pipeline.


The computation of the sectional polygons can be done by taking advantage of dedicated graphics hardware providing per-fragment tests and operations.
Since there are no volumetric primitives in graphics hardware, the computed slice is only a proxy geometry. By sending the computed vertices of each slice down the rendering pipeline, geometric primitives are generated that are decomposed into fragments in the rasterization step, to finally result in a pixel-based texture map. All texture-mapped slices are stored in a stack and are blended back-to-front onto the image plane, which results in a semi-transparent view of the volume.

This image illustrates a stack of 2D textured slices and the proxy geometry that could be
achieved with it.
This picture shows a cube drawn only with outlines. Inside the cube there are several slices of a human head blended over each other from back to front.

The texture-mapped 2D slices are object-aligned, because they are computed in object space parallel to the x-, y-, or z-axis, regardless of the angle from which the viewer looks at the object. For correct visualization of the object from each possible viewing angle, three stacks of 2D texture slices have to be computed. This way one can switch between the different stacks when the viewing angle gets close to 45 degrees to the slice normal. Otherwise the viewer would look onto the slice edges and thus see through the volume. When the voxels of the slice plane are projected onto pixels of the viewing plane, a pixel can lie in-between voxels, so its value has to be found by bilinear interpolation.

Axis aligned 2D textured slices.


The slices inside the cube are aligned with either the x-, y-, or z-axis.

Another method is to compute view-aligned 3D textured slices. In contrast to the object-aligned method, these slices are arranged with their surface normals pointing toward the viewer. Here we only have to compute one stack of 3D textures, but trilinear interpolation is needed to calculate in-between values for the projection. Furthermore, if the object is rotated, the slices have to be recomputed for each frame.
View-aligned 3D textured slices.
The slices in this cube are not aligned with an axis but with the viewpoint. This leads to polygons that lie parallel to each other.

Example of a teddy bear rendered once with axis-aligned slices and once with view-aligned slices.

Positive attributes of 2D textures:


• This method works on older graphics hardware which does not support 3D textures
• Only bilinear interpolation within slices, no trilinear interpolation
• fast
• problems with image quality

Problems:
• Euclidean distance between slices along a light ray depends on viewing parameters
• sampling distance depends on viewing direction
• apparent brightness changes if opacity is not corrected
• Artifacts when stack is viewed close to 45 degrees
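
The brightness change can be compensated by opacity correction. A standard correction, stated here as a sketch: if the transfer function assumes a reference sampling distance $\Delta s_0$ and the actual distance between slices along the viewing direction is $\Delta s$, the stored opacity $\alpha$ is rescaled to

$\alpha' = 1 - (1 - \alpha)^{\Delta s / \Delta s_0}$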

Change of brightness in 2D texture-based rendering.


A human head rendered twice with axis-aligned slices from different viewpoints. In the second image the viewpoint is changed such that more slices lie on top of each other; this leads to a translucency of the whole head.

Artifacts when stack is viewed close to 45 degrees.


The view direction points directly at one lateral edge of the cube, which results in visible artifacts. The slices are axis-aligned.

Positive attributes of 3D textures:


• Needs graphics hardware support for 3D textures
• Trilinear interpolation within volume
• slower
• good image quality
• Constant Euclidean distance between slices along a light ray
• No artifacts due to inappropriate viewing angles

Render components
• Data setup
  • Volume data representation
  • Transfer function representation
• Data download
  • Volume textures
  • Transfer function tables
• Per-volume setup (once per frame)
  • Blending mode configuration
  • Texture unit configuration (3D)
  • (Fragment shader configuration)
• Per-slice setup
  • Texture unit configuration (2D)
• Generate fragments
  • Proxy geometry rendering

Representation of volume data by textures:


• Stack of 2D textures
• One 3D texture

Typical choices for texture format:


• Luminance and alpha
• Pre-classified (pre-shaded) gray-scale volume rendering
• Transfer function is already applied to scalar data
• Change of transfer function requires complete redefinition of texture data

• RGBA
• Pre-classified (pre-shaded) colored volume rendering
• Transfer function is already applied to scalar data

• Paletted texture
• Index into color and opacity table (= palette)
• Index size = byte
• Index is identical to scalar value
• Pre-classified (pre-shaded) colored volume rendering
• Transfer function applied to scalar data during runtime
• Simple and fast change of transfer functions
• OpenGL code for a paletted 3D texture
glTexImage3D (GL_TEXTURE_3D, 0, GL_COLOR_INDEX8_EXT,
              size_x, size_y, size_z, 0, GL_COLOR_INDEX,
              GL_UNSIGNED_BYTE, voldata);

Representation of transfer function:


• For paletted texture only
• 1D transfer function texture = lookup table
• OpenGL extensions required
• OpenGL code
glColorTableEXT (GL_SHARED_TEXTURE_PALETTE_EXT,
                 GL_RGBA8, 256, GL_RGBA,
                 GL_UNSIGNED_BYTE, palette);

Compositing:
• Works on fragments
• Per-fragment operations
• After rasterization
• Blending of fragments via over operator
• OpenGL code for over operator
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

Generate fragments:
• Render proxy geometry
• Slice
• Simple implementation: quadrilateral
• More sophisticated: triangulated intersection surface between slice plane and boundary
of the volume data set
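
Putting the pieces above together, a minimal OpenGL sketch of the per-frame loop for object-aligned 2D slices might look as follows. The texture array tex[], num_slices, and draw_slice_quad() are assumptions standing in for the data download and proxy geometry steps, not a fixed API.

/* Sketch: render an object-aligned 2D slice stack back-to-front. */
glEnable (GL_TEXTURE_2D);
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   /* over operator  */
for (int i = num_slices - 1; i >= 0; i--) {           /* back-to-front  */
    glBindTexture (GL_TEXTURE_2D, tex[i]);            /* per-slice setup */
    draw_slice_quad (i);                              /* proxy geometry  */
}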

Advantages of texture-based rendering:


• Supported by consumer graphics hardware
• Fast for moderately sized data sets
• Interactive explorations
• Surface-based and volumetric representations can easily be combined
• mixture with opaque geometries

Disadvantages of texture-based rendering:


• Limited by texture memory
• Solution: bricking at the cost of additional texture downloads to the graphics
board
• Brute force: complete volume is represented by slices
• No acceleration techniques like early-ray termination or space leaping
• Rasterization speed and memory access can be problematic

Outlook on more advanced texture-based volume rendering techniques:


• Exploit quite flexible per-fragment operations on modern GPUs (nVidia GeForce 3/4 or
ATI Radeon 8500)
• Post-classification (post-shading) possible
• So-called pre-integration for very high-quality rendering
• Decompression of compressed volumetric data sets on GPU to save texture memory
• Object-space method
• Slice-based technique
• Fast object-order rendering
• Accelerated volume visualization via shear-warp factorization [Lacroute & Levoy 1994]
• Software-based implementation

4 Shear-Warp Factorization
As described in the last chapter, algorithms that use spatial data structures can be divided
into two categories according to the order in which the data structures are traversed:
image-order or object-order.
Image-order algorithms operate by casting rays from each image pixel and processing the voxels along each ray (e.g. Ray Casting). This processing order has the disadvantage that the spatial data structure must be traversed once for every ray, resulting in redundant computation (e.g. multiple descents of an octree). In contrast, object-order algorithms
operate by splatting voxels onto the image while streaming through the volume data in
storage order. We will deal with this technique more deeply in this chapter. However, this
processing order makes it difficult to implement early ray termination, an effective
optimization in ray-casting algorithms.

Lacroute [Lacroute-1994-FVR] presented an accelerated volume visualization algorithm via shear-warp factorization that combines the advantages of image-order and object-order algorithms.

The algorithm is software-based and works on the same slice-based technique as texture-based volume rendering. Therefore it also needs three stacks of the volume along the three principal axes. They are computed in object space. The general goal is to make the viewing rays parallel to each other and perpendicular to the image, which is achieved by a simple shear.

Shear along the volume slices.


This figure illustrates the transformation from object space to sheared object space for a parallel projection. The horizontal lines in the figure represent slices of the volume data
viewed in cross-section. After transformation the volume data has been sheared parallel to the set of slices that is most perpendicular to the viewing direction and the viewing rays
are perpendicular to the slices.

Mathematical description of the shear-warp factorization:


Splitting the viewing transformation into separate parts:

$M_{view} = P \cdot S \cdot M_{warp}$

where
$M_{view}$ = general viewing matrix
$P$ = permutation matrix: transposes the coordinate system in order to make the z-axis the principal viewing axis (note that $P$ does not stand for projection)
$S$ = transforms the volume into sheared object space
$M_{warp}$ = warps sheared object coordinates into image coordinates

Shear $S$ for parallel and perspective projections:

$S_{par} = \begin{pmatrix} 1 & 0 & s_x & 0 \\ 0 & 1 & s_y & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad S_{persp} = \begin{pmatrix} 1 & 0 & s'_x & 0 \\ 0 & 1 & s'_y & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & s'_w & 1 \end{pmatrix}$
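
For a parallel projection, the shear coefficients can be derived from the view direction vector $v = (v_x, v_y, v_z)^T$ expressed in the permuted coordinate system (a sketch following [Lacroute-1994-FVR]; the signs depend on the coordinate conventions):

$s_x = -\frac{v_x}{v_z}, \qquad s_y = -\frac{v_y}{v_z}$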
Algorithm:
1. Transform the volume data to sheared object space by translating and resampling each slice according to $S$. For perspective transformations, also scale each slice. $P$ specifies which of the three possible slicing directions to use. Bilinear interpolation is used for resampling.
2. Composite the resampled slices together in front-to-back order using the over operator. This step projects the volume into a 2D intermediate image in sheared object space. Here we can use early ray termination to accelerate the algorithm.
3. Transform the intermediate image to image space by warping it according to $M_{warp}$. This second resampling step produces the correct final image.

Shear along the volume slices, composite the resampled slices to an intermediate image
and warp it to image space.
In this illustration, several parallel rays are cast through the volume slices at an arbitrary angle. In the second step, all rays are sheared at the first slice, so they stand perpendicular on it.

Example for one scan line:

After shearing the slices, they have to be resampled. Then a sample ray hits one voxel center after another, and no further interpolation is needed.
Most of the work in volume rendering has focused on parallel projections. However,
perspective projections provide additional cues for resolving depth ambiguities.
Perspective projections present a problem because the viewing rays diverge so it is
difficult to sample the volume uniformly.

For a perspective projection each image slice has to be scaled after shearing and
warping.
The Algorithm in a nutshell:
• Shear (3D)
• Project (3D → 2D)
• Warp (2D)

The Projection in sheared object space has several geometric properties that simplify the
compositing step of the algorithm:
1. Scan lines of pixels in the intermediate image are parallel to scan lines of voxels in the
volume data.
2. All voxels in a given voxel slice are scaled by the same factor. In different slices by
different factors.
3. Parallel projections only: Every voxel slice has the same scale factor. The scale factor for parallel projections can be chosen arbitrarily. We choose a unity scale factor so that for a given voxel scan line there is a one-to-one mapping between voxels and intermediate image pixels.

Highly optimized algorithm for:


• Parallel projection and
• Fixed opacity transfer function (pre-shading); if the transfer function is changed interactively, all three slice stacks have to be recomputed.

Optimization of volume data (voxel scan lines):


• Run-length encoding of voxel scan lines
• Skip runs of transparent voxels
• Transparency and opaqueness determined by user-defined opacity threshold

Optimization in intermediate image:


• Skip opaque pixels in intermediate image (analogously to early-ray termination)
• Store (in each pixel) offset to next non-opaque pixel (as shown in graphic below).

Offsets stored with opaque pixels in the intermediate image allow occluded voxels to be
skipped efficiently.
This picture shows a row of 8 pixels. Pixels 1, 4, and 8 are non-opaque (white), the others are opaque (black). Each opaque pixel stores a pointer to the next non-opaque pixel.
By combining both optimizations, of the volume data and of the intermediate image, we can exploit coherence and make the algorithm very fast. As the first property, parallel scan lines are used for pixels and voxels. That means that voxel scan lines in the sheared volume are aligned with pixel scan lines in the intermediate image, and both can be traversed in scan-line order simultaneously. By using the run-length encoding of the voxel data to skip voxels which are transparent, and the run-length encoding of the image to skip voxels which are occluded, we perform work only for voxels which are both non-transparent and visible.

Resampling and compositing are performed by streaming through both the voxels and the
intermediate image in scanline order, skipping over voxels which are transparent and
pixels which are opaque.
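
A hedged C sketch of this combined traversal for one scan line; the run-length and offset helpers are assumptions about the data layout, not Lacroute's actual implementation.

/* Sketch: simultaneous scan-line traversal of RLE voxels and RLE pixels. */
int i = 0;                                /* position in the scan line */
while (i < line_length) {
    if (voxel_run_is_transparent(i))
        i += voxel_run_length(i);         /* skip a run of transparent voxels       */
    else if (pixel_is_opaque(i))
        i = next_non_opaque_pixel(i);     /* skip occluded voxels via stored offset */
    else {
        resample_and_composite(i);        /* work only here: visible, non-transparent */
        i++;
    }
}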
Coherence in voxel space:
• Each slice of the volume is only translated
• Fixed weights for bilinear interpolation within voxel slices
• Computation of weights only once per frame

Final warping:
• Works on composited intermediate image
• Warp: affine image warper with bilinear filter
• Often done in hardware: render a quadrilateral with intermediate 2D image being
attached as 2D texture

Parallel projection:
• Efficient reconstruction
• Lookup table for shading
• Lookup table for opacity correction (thickness)
• Three RLE of the actual volume (in x, y, z)

Perspective projection:
• Similar to parallel projection
• Viewing rays diverge
• Difference: voxels need to be scaled
• Hence more than two voxel scan lines are needed for one image scan line
• The many-to-one mapping from voxels to pixels makes it slower than parallel projection.
5 Splatting
Splatting is a volume rendering method that distributes volume data values across a
region on the rendered image in terms of a distribution function.
Splatting is an object-order method described by Westover [Westover, 1990]. The method accumulates data points by "throwing" a kernel for each voxel onto the drawing plane. The footprints on the drawing plane represent the visualization. The kernel shape and size are critical for the quality of the result. The accumulation, or compositing, process can be done back-to-front, which guarantees correct visibility. Front-to-back compositing is faster, because the process can be stopped when pixels are fully opaque. The original algorithm is fast but poor in quality; however, since the first publication many improvements have been implemented. Note that good splatting results need as much CPU computing time as ray casting.

The basic idea of splatting is to project each sample (voxel) from the volume onto the image plane, which can be understood in such a way that each voxel is thrown onto a plane and leaves its footprint on it. For each thrown voxel we have to compute which part of the image plane it affects.

The Splatting approach throws voxels on the image plane.


Illustrated is a 3D cube of 3×3×3 cells that throws its voxels onto a 2D plane of 3×3 pixels. Each voxel leaves a splat on the image plane.

You can think of the volume (cloud) as follows: each voxel is a measurement of a continuous scalar function. Ideally we want to reconstruct the continuous volume and not the discrete values. That's why we use an interpolation kernel $w$, which should be spherically symmetric, because we don't want different results when looking from different directions. A 3D Gaussian convolution kernel is a good choice.
The reconstruction can be done through:

$f_r(v) = \sum_k w(v - v_k)\, f(v_k)$

where
$f(v_k)$ is the density value at sample $k$ and
$w(v - v_k)$ is the reconstruction kernel, such as the Gaussian function.

Analytic integral along a ray $r$ for intensity (emission):

$I(p) = \int f_r(p + r)\, dr = \int \sum_k w(p + r - v_k)\, f(v_k)\, dr$

where
$I(p)$ is the intensity for pixel $p$,
$f_r$ is the reconstructed scalar function,
$r$ is the ray, and
$\sum_k$ is the sum over all voxels weighted by the kernel.

We can rewrite the formula by switching the summation and the integral:

$I(p) = \sum_k f(v_k) \cdot \int w(p + r - v_k)\, dr$

with $\int w(p + r - v_k)\, dr$ as the splatting kernel (= "splat") or footprint.

Discretization via 2D splats, where the ray is along the z-axis:

$\mathrm{Splat}(x, y) = \int w(x, y, z)\, dz$

from the original 3D kernel. Often $w$ is chosen as a Gaussian:

$w(x, y, z) = d\, \exp(-x^2/a^2 - y^2/b^2 - z^2/c^2)$

The 3D rotationally symmetric filter kernel is integrated to produce a 2D filter kernel.

The 2D filter kernel has the highest intensity at its center.

A cross-section of the 3D Gaussian filter kernel looks like a circle. The integration along one dimension gives the typical symmetric 2D Gaussian filter kernel that looks like a hat.

• Draw each voxel as a cloud of points (footprint) that spreads the voxel contribution
across multiple pixels
• Footprint: splatted (integrated) kernel
• Approximate the 3D kernel h(x,y,z) extent by a sphere

Approximated 3D convolution kernel.


This image shows the 3D gaussian filter kernel that looks like a sphere.
• A larger footprint increases blurring and is used for a high pixel-to-voxel ratio.
• A smaller footprint generates gaps.
• Footprint geometry:
• Orthographic projection: the footprint is independent of the view point.
• Perspective projection: the footprint is elliptical.
• The splat footprint, a function of x and y, can be pre-computed and stored. Once this is done, the algorithm just needs to add many of these footprints for the projection onto the image plane (see the sketch below).
• For perspective projection: additional computation of the orientation of the ellipse.
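
Since the footprint can be pre-computed and stored, a splatting renderer typically builds a footprint table once. The C sketch below assumes a radially symmetric Gaussian, whose z-integral is again a scaled 2D Gaussian; the table resolution and the variance constant are arbitrary assumptions.

#include <math.h>
#define FP_RES 64                       /* assumed table resolution */
static float footprint[FP_RES][FP_RES];

/* Pre-compute Splat(x,y) = integral of w(x,y,z) dz on a [-1,1]^2 grid;
   for a Gaussian kernel this integral is again a scaled 2D Gaussian. */
void build_footprint(void)
{
    for (int j = 0; j < FP_RES; j++)
        for (int i = 0; i < FP_RES; i++) {
            float x = 2.0f * i / (FP_RES - 1) - 1.0f;
            float y = 2.0f * j / (FP_RES - 1) - 1.0f;
            footprint[j][i] = expf(-(x * x + y * y) / 0.2f);
        }
}

During projection, the renderer then only scales and adds this table at each voxel's screen position.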

2D convolution kernel.
If the 3D spherical kernel is projected onto a 2D plane by a perspective projection, it leaves an elliptical footprint.
The choice of kernel can affect the quality of the image, as you can see in the picture below. Examples are the cone, Gaussian, sinc, and bilinear function.

Example of the 2D Gaussian convolution kernel.


For splatting, each slice of the volume is filled with a field of 3D interpolation kernels, one
kernel at each grid voxel. The reconstruction kernels might overlap and each of them
leaves a 2D footprint on the screen, which is weighted and accumulated into the final
image.
Splats on the viewing plane.
This image shows 2 overlapping splats on the viewing plane. The footprints on the screen accumulate the color of both splats.

Voxel kernels are added within each 2D sheet of the volume. The weighted footprints of
the sheets are composited in front-to-back order. For projection of all sheets in the sheet-
buffer we use volume slices most parallel to the image plane (analogously to stack of
slices).

Which stack of volume slices is used for projection onto the image plane depends on the viewing angle.
If the viewing angle is about 30 degrees, we choose the stack aligned with the x-axis; if the angle is about 70 degrees, we choose the stack aligned with the y-axis.
Core algorithm for splatting:
Volume:
• Represented by voxels
• Slicing
Image plane:
• Sheet buffer
• Compositing buffer
Add voxel kernels within first sheet to the sheet-buffer.

Transfer sheet buffer to compositing buffer.

Add voxel kernels within second sheet to the sheet-buffer.


Composite sheet buffer with compositing buffer.

Add voxel kernels within third sheet to the sheet-buffer

Composite sheet buffer with compositing buffer.


A problem that leads to inaccurate compositing is when two or more splats overlap. In
these regions we will have an incorrect mixture of integration (3D kernel to 2D splat) and
compositing.
Overlapping kernels lead to aliasing.
Advantages:
• Footprints can be pre-integrated.
• Quite fast: voxel interpolation is in 2D on screen.
• Simple static parallel decomposition.
• Acceleration approach: only relevant voxels must be projected, air voxels for example
can be disregarded.

Disadvantages:
• Blurry images for zoomed views, because the Gaussian cuts away high frequencies.
• High fill-rate for zoomed splats.
• Reconstruction and integration to be performed on a per-splat basis.
• Dilemma when splats overlap

Simple extension to volume data without grids:


• Scattered data with kernels.
• Example: SPH (smooth particle hydrodynamics).
• Needs sorting of sample points.

6 Equation of Transfer for Light


All rendering models described so far are not physically correct. The goal is to find a physical model for volume rendering. There are two main approaches, the emission-absorption model by Hege [Hege-1993-MMA] and the density-emitter model by Sabella [Sabella-1988-RAV]. There also exist other, more general approaches like linear transport theory or the equation of transfer for light by Kajiya [Kajiya-1986-TRE], which is the basis for all rendering methods. The equation tells you what happens to the radiation of a ray when it moves through the volume. The important aspects are absorption, emission, scattering, and participating media like gas, fire, or semi-transparent materials.
Two ways to make the equation of transfer simpler:
1. Changes can only appear through interaction with surfaces. If there is no medium (vacuum), the ray is not affected and we can use the standard rendering equation. The radiance remains constant.
2. If there is a medium, we have absorption along the ray passing through the volume. With the emission-absorption model we have no scattering, which means no change of direction and no computation of secondary rays. In this case we can use the standard volume rendering equation.
Based on geometric optics, we select the equation of transfer depending on the medium and on scattering. If there is no medium, we can choose the standard rendering equation or, more precisely, the radiosity equation. And if there is no scattering, we can choose the typical volume rendering equation.
The basic idea of the equation of transfer for light is very much in the spirit of the radiosity equation, simply balancing the energy flows from one point of a surface to another. The equation states that the transported intensity of light from one surface point to another is simply the sum of the emitted light and the total light intensity which is scattered toward $x$ from all other surface points. The more detailed explanations below are based on the equation model by Hege [Hege-1993-MMA].

Assumptions:
• Based on a physical model for radiation
• Geometrical optics

Neglect:
• Diffraction
• Interference
• Wave-character
• Polarization

Interaction of light with matter at the macroscopic scale:


• Describes the changes of specific intensity due to absorption, emission, and scattering

Based on energy conservation on microscopic level (not on photon and electron level).

• The basic quantity of light is the radiance, sometimes also called specific intensity $I$.

Photons reflected on a surface.
Given is a surface element with area $dA$, a direction vector $n$ at angle $\theta$ to the surface normal, and the solid angle $d\Omega$.

The equation of radiative energy tells us, for a surface element and radiation in a specific direction $n$ within a solid angle, the amount of energy moving through this area. The equation looks like:

$dE = I(x, n, \nu)\, \cos\theta\, dA\, d\Omega\, d\nu\, dt$

where
$dE$ = amount of radiative energy, traveling in time $dt$ within a frequency interval $d\nu$ around $\nu$ through a surface element $dA$ into an element of solid angle $d\Omega$ in direction $n$
$I$ = radiance or specific intensity (6-dimensional: position: 3 dimensions, direction: 2 dimensions (angles), frequency: 1 dimension)
$x$ = position
$n$ = direction
$\nu$ = frequency
$\theta$ = angle between $n$ and the normal of $dA$
$\cos\theta\, dA$ = projected area element
$d\Omega$ = solid angle
$d\nu$ = frequency interval
$dt$ = time

Four effects are responsible for a difference between incoming and outgoing radiance at a single position:
• Absorption: specific intensity comes in, less intensity goes out.
• Emission: less energy comes in, but more energy goes out.
• Outscattering: energy scatters out in all directions and can scatter in at another part.
• Inscattering: energy scatters in from other parts.

Contributions to radiation at a single position.

This illustration shows the four different contributions to radiation. With absorption, more energy goes into one point than comes out again. Outscattering distributes the incoming energy in all directions. Emission at one point releases more energy than comes in. Inscattering collects energy from every direction and releases it in one specific direction.
Absorption:
When radiation passes through material, energy is generally removed from the beam. This loss is described in terms of an extinction coefficient or total absorption coefficient $\chi(x, n, \nu)$.
The amount of energy removed from a beam with specific intensity $I(x, n, \nu)$, when passing through a cylindrical volume element of length $ds$ and cross section $dA$, is given by:

$dE^{ab} = \chi(x, n, \nu)\, I(x, n, \nu)\, ds\, dA\, d\Omega\, d\nu\, dt$

where
$dE^{ab}$ = absorbed energy
$\chi(x, n, \nu)$ = absorption coefficient (a scalar value or function)

Notice that no cosine term appears in this expression. This is because the absorbing volume does not depend on the incident angle. The absorption coefficient generally is a function of $x$, $n$, and $\nu$, and is measured in units of $m^{-1}$. The term $1/\chi$ is also known as the mean free path of photons of frequency $\nu$ in the material.

Removal of radiative energy by true absorption (conversion to thermal energy).


Absorption of energy through a cylindrical volume element is proportional to its length.

It is important to distinguish between true or thermal absorption and emission processes, and the process of scattering. In the former case, energy removed from the beam is converted into material thermal energy, and energy is emitted into the beam at the expense of material energy, respectively. In contrast, in a scattering process a photon interacts with a scattering center and emerges from the event moving in a different direction, in general with a different frequency, too. Therefore the total absorption coefficient consists of two parts:
• True absorption coefficient $\kappa(x, n, \nu)$
• Scattering coefficient $\sigma(x, n, \nu)$

Total absorption coefficient: $\chi = \kappa + \sigma$

The ratio of the scattering coefficient to the total absorption coefficient is called the albedo. An albedo of 1 means that there is no true absorption at all; this is the case of perfect scattering.

Scattering out of solid angle $d\Omega$.

Emission:
The emission coefficient $\eta(x, n, \nu)$ is defined in such a way that the amount of radiant energy within a frequency interval $d\nu$ emitted in time $dt$ by a cylindrical volume element of length $ds$ and cross section $dA$ into a solid angle $d\Omega$ in direction $n$ is:

$dE^{em} = \eta(x, n, \nu)\, ds\, dA\, d\Omega\, d\nu\, dt$

In an analogous way as for absorption, we break the total emission coefficient into two parts:
• Thermal part or source term $q(x, n, \nu)$
• Scattering part $j(x, n, \nu)$

Total emission coefficient: $\eta = q + j$

Scattering:
In order to take the angle dependence of scattering into account, we introduce a phase function $p(x, n, n', \nu, \nu')$. The amount of radiant energy scattered from frequency $\nu$ to frequency $\nu'$ and from direction $n$ to direction $n'$ is given by:

$dE^{scat} = \sigma\, I\, ds\, dA\, d\Omega\, d\nu\, dt \cdot \frac{1}{4\pi}\, p(x, n, n', \nu, \nu')\, d\Omega'\, d\nu'$

where $\sigma\, I\, ds\, dA\, d\Omega\, d\nu\, dt$ is the total amount scattered out of the beam, of which the phase function selects the fraction going into direction $n'$.

It should be mentioned that there is no deeper meaning behind the factor $\frac{1}{4\pi}$. It simply cancels the factor $4\pi$ resulting from integrating a unity phase function over the sphere. Furthermore, in a scattering process a photon interacts with a scattering center and emerges from the event moving in a different direction, in general with a different frequency, too. If the frequency doesn't change, one speaks of elastic scattering.

Scattering can appear when photons hit a surface (rendering) or when they enter a participating medium (volume rendering).
This picture shows an example of surface scattering. Incoming energy from one specific direction scatters on the surface and gets distributed in several directions. The next example shows volume scattering inside a medium. There the incoming energy gets scattered several times inside the medium, which always results in a change of direction. This means the energy can enter the medium at any point and can leave it at any other possible point, depending on how often it scatters.

Derivation of the Equation of Transfer:


The equation of transfer describes the change of specific intensity due to absorption, emission, and scattering. With all material constants given, the radiation field can be calculated from this equation. Consider a cylindrical volume element. The difference between the amount of energy emerging at position $x + dx$ and the amount of energy incident at $x$ must be equal to the difference between the energy created by emission and the energy removed by absorption. Thus we have:

$\{I(x + dx, n, \nu) - I(x, n, \nu)\}\, dA\, d\Omega\, d\nu\, dt = \{-\chi(x, n, \nu)\, I(x, n, \nu) + \eta(x, n, \nu)\}\, ds\, dA\, d\Omega\, d\nu\, dt$

(the difference is due to the four effects described above) where

$-\chi(x, n, \nu)\, I(x, n, \nu)$ = loss of energy
$I(x, n, \nu)$ = radiance
$\eta(x, n, \nu)$ = emission

If we write $dx = n\, ds$, the time-independent equation of transfer follows. Note that the emission coefficient in general contains a scattering part and thus an integral over $I$:

$n \cdot \nabla I = -\chi\, I + \eta$

where $n \cdot \nabla I$ is the directional derivative ($\cdot$ denotes the scalar product and $\nabla$ the gradient operator), $-\chi I$ are the losses, and $\eta$ the gains.

This means that the radiance along a ray in direction $n$ changes due to absorption plus emission. In a vacuum, for example, $-\chi I = 0$ and $\eta = 0$, so we have no absorption and no emission, which means the radiance does not change.

Writing out the equation of transfer:

$n \cdot \nabla I = -(\kappa + \sigma)\, I + q + \frac{1}{4\pi} \iint \sigma(x, n', \nu')\, p(x, n', n, \nu', \nu)\, I(x, n', \nu')\, d\Omega'\, d\nu'$

where
$\kappa$ = true absorption
$\sigma$ = scattering
$q$ = true emission
$\iint$ = integration over all inscattered photons

Without frequency dependency and without inelastic scattering:

$n \cdot \nabla I = -(\kappa + \sigma)\, I + q + \frac{1}{4\pi} \int \sigma(x, n')\, p(x, n', n)\, I(x, n')\, d\Omega'$

In this case the integro-differential equation is a time-independent equation of transfer. Integro-differential in general means that the equation contains at least one integral part and one differential part. Here we have $\nabla I$ as the differential on one side and the integration over $I$ itself from the inscattering part.

Boundary conditions for the equation of transfer:

The equation of transfer alone does not describe the radiation field completely; it is only valid away from surfaces. So we have to specify some boundary conditions to completely describe what happens. This is also necessary in order to eliminate the constant terms arising from the integration of the gradient operator in the time-independent equation. In the simplest case we have explicit boundary conditions; they stand for the emission of light at boundaries and are independent of $I$ itself. The other case is called implicit boundary conditions and stands for the reflection of light at surfaces. A combination of explicit and implicit boundary conditions on a boundary surface can be written as:

$I(x, n, \nu) = E(x, n, \nu) + \iint k(x, n', n, \nu', \nu)\, I^{inc}(x, n', \nu')\, d\Omega'\, d\nu'$

where
$E(x, n, \nu)$ = emission
$k(x, n', n, \nu', \nu)$ = surface scattering kernel
$I^{inc}(x, n', \nu')$ = incoming radiation

Integral form of the Equation of Transfer:

Instead of an integro-differential equation we can rewrite the equation of transfer as a pure integral equation:

$n \cdot \nabla I(x, n, \nu) = -\chi(x, n, \nu)\, I(x, n, \nu) + \eta(x, n, \nu)$

yields

$\frac{\partial I(x, n, \nu)}{\partial s} = -\chi(x, n, \nu)\, I(x, n, \nu) + \eta(x, n, \nu)$

Notice that the operator $n \cdot \nabla$ is the directional derivative along a line $x = p + s\, n$, with $p$ being some arbitrary reference point.

Derivative along a line with an arbitrary reference point.

The optical depth between two points $x_1 = p + s_1 n$ and $x_2 = p + s_2 n$ is defined as:

$\tau(x_1, x_2) = \int_{s_1}^{s_2} \chi(p + s' n, n, \nu)\, ds'$

Recall that $1/\chi$ is the mean free path of a photon with frequency $\nu$. Therefore we can interpret the optical depth as the number of mean free paths between the two points $x_1$ and $x_2$. Furthermore, the optical depth serves as an integrating factor $e^{\tau(x_0, x)}$ for the equation of transfer, and thus we can write:

$\frac{\partial \left( I(x, n, \nu)\, e^{\tau(x_0, x)} \right)}{\partial s} = \eta(x, n, \nu)\, e^{\tau(x_0, x)}$

This equation can be integrated and we obtain:

$I(x, n, \nu)\, e^{\tau(x_0, x)} - I(x_0, n, \nu) = \int_{s_0}^{s} \eta(x', n, \nu)\, e^{\tau(x_0, x')}\, ds'$

where
$\int_{s_0}^{s} \eta(x', n, \nu)\, e^{\tau(x_0, x')}\, ds'$ = integral of the total emission
$e^{\tau(x_0, x')}$ = integrating factor
$x_0$ = point on the boundary surface

Making use of the fact that the optical depth can be decomposed into $\tau(x_0, x) = \tau(x_0, x') + \tau(x', x)$, the final integral form of the equation of transfer can be written as follows:

$I(x, n, \nu) = I(x_0, n, \nu)\, e^{-\tau(x_0, x)} + \int_{s_0}^{s} \eta(x', n, \nu)\, e^{-\tau(x', x)}\, ds'$

where
$I(x_0, n, \nu)\, e^{-\tau(x_0, x)}$ = attenuated radiation entering at the boundary
$\int_{s_0}^{s} \eta(x', n, \nu)\, e^{-\tau(x', x)}\, ds'$ = emission at any point inside the volume, absorbed on its further way

This equation can be viewed as the general formal solution of the time-independent equation of transfer. It states that the intensity of radiation traveling along $n$ at point $x$ is the sum of photons emitted from all points along the line segment $x' = p + s' n$, attenuated by the integrated absorptivity of the intervening material, plus an attenuated contribution from photons entering the boundary surface where it is pierced by that line segment. Generally, the emission coefficient $\eta$ will contain $I$ itself.
The vacuum condition is an important special case of the transport equation, because there is no absorption, emission, or scattering at all, except on surfaces. Furthermore, the frequency dependence can be ignored and the equation of transfer (inside the volume) is greatly simplified:

$I(x, n) = I(x_0, n)$

Rays incident on a surface at $x$ are traced back to some other surface element at $x'$:

$I^{inc}(x, n') = I(x', n')$

If we substitute this into the generic boundary condition:

$I(x, n, \nu) = E(x, n, \nu) + \iint k(x, n', n, \nu', \nu)\, I^{inc}(x, n', \nu')\, d\Omega'\, d\nu'$

we obtain the famous rendering equation published by Kajiya [Kajiya-1986-TRE]:

$I(x, n) = E(x, n) + \int k(x, n', n)\, I^{inc}(x, n')\, d\Omega', \qquad x \in S$

where $S$ = surface.

Rewriting the element of solid angle $d\Omega'$ into a more familiar form

$d\Omega' = \frac{\cos\theta'}{|x - x'|^2}\, dA'$

leads to the standard form via the BRDF (bidirectional reflectance distribution function):

$k(x, n', n) = f_r(x, n', n)\, \cos\theta_i$

Special case for most volume rendering approaches:


• Emission-absorption model
• Density-emitter model [Sabella-1988-RAV]
• Volume filled with light-emitting particles
• Particles described by density function

If we ignore scattering in the volume rendering equation, we are left with the so-called emission-absorption model or density-emitter model. One can imagine the volume to be filled with small light-emitting particles. Rather than being modeled individually, these particles are described by some density function.
This leads to some simplifications:
• No scattering
• Emission coefficient consists of the source term only: $\eta = q$ (true emission)
• Absorption coefficient consists of true absorption only: $\chi = \kappa$
• No mixing between frequencies (no inelastic effects), so we can ignore $\nu$

Let us consider a ray of light traveling along a direction $n$, parametrized by a variable $s$. The ray enters the volume at position $s_0$. Suppressing the argument $n$, the equation of transfer in integral form reads:

$I(s) = I(s_0)\, e^{-\tau(s_0, s)} + \int_{s_0}^{s} q(s')\, e^{-\tau(s', s)}\, ds'$

where
$I(s_0)$ = incoming radiance, usually given by the boundary conditions
$e^{-\tau(s_0, s)}$ = absorption
$q(s')$ = emission inside the volume
$e^{-\tau(s', s)}$ = absorption on the way out

with optical depth

$\tau(s_1, s_2) = \int_{s_1}^{s_2} \kappa(s)\, ds$

This equation is much simpler because we are only interested in the radiance along the ray without any scattering. In order to solve this equation one has to discretize along the ray. There is no need to use equidistant steps, though this is the most commonly used procedure. If we divide the range of integration into $n$ intervals, then the specific intensity at position $s_k$ is related to the specific intensity at position $s_{k-1}$ by the following identity. So the discretization of the volume rendering equation can be written as:

$I(s_k) = I(s_{k-1})\, e^{-\tau(s_{k-1}, s_k)} + \int_{s_{k-1}}^{s_k} q(s)\, e^{-\tau(s, s_k)}\, ds$

where
$e^{-\tau(s_{k-1}, s_k)}$ = attenuation between $s_{k-1}$ and $s_k$
$q(s)$ = emission term, with $q$ the color which is emitted

For abbreviation we define two parts:

• Transparency part: $\theta_k = e^{-\tau(s_{k-1}, s_k)}$

• Emission part: $b_k = \int_{s_{k-1}}^{s_k} q(s)\, e^{-\tau(s, s_k)}\, ds$

So the specific intensity at $s_n$ can now be written as:

$I(s_n) = I(s_{n-1})\, \theta_n + b_n = \sum_{k=0}^{n} b_k \prod_{j=k+1}^{n} \theta_j$

This formula describes the entire physics of radiation transport, with the sum of all things that get emitted and the product of all transparencies (this is the over operator from ray casting). More exactly, the quantity $\theta_k$ is called the transparency of the material between $s_{k-1}$ and $s_k$. Transparency is a number between 0 and 1. An infinite optical depth $\tau(s_{k-1}, s_k)$ corresponds to a vanishing transparency $\theta_k = 0$. On the other hand, a vanishing optical depth corresponds to full transparency $\theta_k = 1$. Alternatively, one often uses the opacity, defined as $\alpha_k = 1 - \theta_k$.
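
To see how this recursion yields the over operator, one can unroll it for two steps (a worked check using the definitions above):

$I(s_2) = I(s_1)\, \theta_2 + b_2 = \left( I(s_0)\, \theta_1 + b_1 \right) \theta_2 + b_2 = I(s_0)\, \theta_1 \theta_2 + b_1 \theta_2 + b_2$

Each emission term $b_k$ is attenuated by the product of the transparencies $\theta_j$ of all material in front of it, which is exactly the compositing performed by the over operator in ray casting.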

7 Compositing Schemes
As a matter of course, there exist variations of compositing schemes other than the over operator:
• First
• Average
• Maximum intensity projection
• Accumulate

Illustration of different styles of compositing schemes.


Given is an arbitrary density function in a coordinate system where the x-axis stands for depth and the y-axis stands for intensity.
The first-hit compositing works as follows: send a ray into the volume; if the scalar value at a point is higher than the threshold, use this one to extract isosurfaces and render them. The first value above a specified threshold is used for rendering.
First-hit rendering leads to a simple solid isosurface of the specified value.

Average or linear compositing produces basically an X-ray picture and takes no opacity
into account.

The result of average compositing looks like an X-ray picture.


Average sends a ray through the volume and composites all found values with the same weight.

Maximum intensity projection (MIP) uses the greatest value along the ray for rendering. It is often used for magnetic resonance angiograms or to extract vessel structures from a CT or MR scan.

MIP emphasises structures with high intensities.


Maximum Intensity Projection takes only the highest values into account.

Accumulate compositing is similar to the over operator and the emission-absorption model
where we have early ray termination. Intensity colors are accumulated until enough
opacity is collected. This makes transparent layers visible (see volume classification).

Accumulation stops if enough opacity is collected.


With accumulation one gets a semi-transparent representation of the object.
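
The four schemes differ only in how a sample updates the running value along the ray. A compact, hedged C sketch follows; sample(), alpha_tf(), color_tf(), threshold, and num_steps are assumed helpers and parameters, and early ray termination (section 2.1) would apply to the accumulate scheme only, so it is omitted here.

/* Sketch: one ray, four compositing schemes computed side by side. */
float first = 0.0f, avg = 0.0f, mip = 0.0f, C = 0.0f, A = 0.0f;
int   hit = 0;
for (int i = 0; i < num_steps; i++) {
    float v = sample(ray, i);                           /* interpolated scalar value    */
    if (!hit && v > threshold) { first = v; hit = 1; }  /* first (isosurface)           */
    avg += v / num_steps;                               /* average: X-ray-like picture  */
    if (v > mip) mip = v;                               /* maximum intensity projection */
    float a = alpha_tf(v);                              /* accumulate: over operator    */
    C += (1.0f - A) * a * color_tf(v);
    A += (1.0f - A) * a;
}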
8 Non-uniform Grids
• Resampling approaches, adaptive mesh refinement (AMR)
• Cell projection for unstructured (tetrahedral) grids
• Shirley-Tuchman [Shirley-1990-PAD]
