Vector Fields
Max, Hanrahan and Crawfis [Max90] demonstrate how to incorporate geometric surfaces into their back-to-front compositing of volume polyhedra. This was limited to isocontour surfaces, and also required splitting the polyhedron up into several pieces and shipping each individual piece to the volume renderer. Lately, projective techniques have been developed that use a geometric description of the cloud density within each voxel.

Our goal was to render the relationship between turbulent flow fields and scalar density fields throughout a three-dimensional volume. This was driven by a requirement to visualize the cause and effect relationship of clouds and winds within global climate models [Potter91]. Global climate modeling produces a time history of data, each time step of which needs to be rendered. We have investigated the use of high-frequency textures to represent vector fields in two dimensions [Crawfis91]. Here, an anisotropic texture is derived from the vector field. Time dynamics are then created by simply regenerating an anisotropic texture at each time point. Since we recognize frequency, but not phase, in patterns and textures, a smooth flow is created that provides the illusion of motion in an animation, without requiring any advection. A method that does use advection on the same climate data is described in [Max92]. We have extended our research on two-dimensional vector filters into three dimensions, and incorporated the integration of a scalar field with the compositing of the vector field to accomplish our goals.
Filter Movement

A key criterion for good texture generation of the vector field is to avoid patterns caused by the regular movement of the filter, or the regular spacing of the data. Three options to overcome these patterns are available. Within the filter kernel, the center point (P in Figure 2) through which the vector passes is randomly chosen (Figure 3a). The major reduction in patterns comes from controlling the movement of the entire filter. The filter is moved with random jitters in its increment to prevent the regular spacing apparent from the clipped edges of the vectors (Figure 3b). Finally, the filter is moved in increments smaller than its width (Figure 3c). This allows vectors to overlap, and blurs the extent of each individual vector. While the differences between these three images may not be substantial, when we animate the images, very different results appear. What we are after is the illusion that the particles or the texture as a whole is moving, not individual flags waving in the wind. Whether all of these jitterings are necessary has yet to be determined; however, none of them require additional resources of any significance. These jitterings were developed for two-dimensional filters and side or top views of three-dimensional data sets. Oblique views in three dimensions will naturally break up regular patterns to some extent.

Vector Kernels

Vectors are represented on the image as line segments with varying color and opacity. As the filter moves along the output image, a vector kernel performs three tasks: determine the projection of the vector, calculate the color and opacities of the vector line segment and the scalar function, and composite this information into the image. The vector is projected onto the viewing plane by taking the dot product of the vector with two basis vectors defining the viewing plane:

$$u_x = \vec{V} \cdot \vec{i}, \qquad u_y = \vec{V} \cdot \vec{j}$$

where the projected vector $\vec{u} = (u_x, u_y)^T$. This projected vector, $\vec{u}$, is then normalized for use in future operations.

Once we have the projection of the vector onto the screen, we then need to determine for each pixel what fraction of the vector lies within the pixel and what the various attributes of the vector are at that pixel. Assuming circular pixels, the area of overlap with a thick line segment can be estimated by taking the absolute value of the dot product of the vector $\vec{n}_p$, perpendicular to $\vec{u}$, with the vector from the center point to the current pixel (Figure 2). This gives us the perpendicular distance from the axis of the vector to the pixel. The function

$$f(r) = \begin{cases} 1.0 & 0 \le r \le r_1 \\ ar + b & r_1 \le r \le r_2 \\ 0.0 & r \ge r_2 \end{cases}$$

can then be used to produce smooth anti-aliased lines. The values of $r_1$ and $r_2$ control the thickness of line segments, and are specified by the user. These anti-aliased lines work better and are computationally easier than using cylinders.

The area is used as an opacity and color scaling factor in compositing the vector into the image. Several controls over the representation of the vectors are available. Depending on the kernel, an arbitrary color mapping scheme is offered. Current kernels will map either the world z-coordinates or the screen z-coordinates (z-height) to a color in a user-specified color table. The z-height can include both the relative position in the data set and the height increase of the vector across the filter. By heavily weighting this latter term, color can be mapped to show the vertical velocity component. Other color mapping schemes, such as those proposed by Van Gelder and Wilhelms [VanGelder92], could easily be incorporated. This color is used as the base color or hue. By desaturating one end of the vector, we can add an indication of the signed direction of the vector (i.e., a vector head). Here, if we simply take the dot product of the projected vector and the vector to the pixel center, we will get a measure of where we are along the axis of the vector. Since we are only concerned with the pixels along the vector axis at this point, we can use this measure directly.

Finally, we adjust the vector's intensity by its magnitude. A depth cue can also be applied by adjusting the intensity based on the linear distance from the view point.

Figure 2. Determining Pixel Values

Figure 3. Vector Kernel Movement Effects: a) Jittering of the center point, b) Jittering of the stride length, c) Overlapping strides.
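The per-pixel coverage computation described above (the perpendicular distance obtained from the dot product with the perpendicular vector n_p, followed by the f(r) ramp) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the default r1, r2 values are assumptions for the example.

```python
import numpy as np

def vector_coverage(center, u, pixels, r1=0.5, r2=1.5):
    """Estimate per-pixel coverage of a thick anti-aliased line segment.

    center : (2,) center point P through which the vector passes
    u      : (2,) projected vector direction (normalized internally)
    pixels : (N, 2) pixel centers
    r1, r2 : user-specified radii controlling the line thickness
    """
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)            # normalize the projected vector
    n_p = np.array([-u[1], u[0]])        # n_p: perpendicular to u

    d = np.asarray(pixels, dtype=float) - np.asarray(center, dtype=float)
    r = np.abs(d @ n_p)                  # |n_p . (pixel - P)|: perpendicular distance

    # Piecewise ramp f(r): full coverage inside r1, linear falloff to zero at r2.
    a = -1.0 / (r2 - r1)
    b = -a * r2
    return np.where(r <= r1, 1.0, np.where(r >= r2, 0.0, a * r + b))
```

The returned coverage plays the role of the opacity and color scaling factor used when compositing the vector into the image.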
discrepancies in the renderer are not noticeable for the test cases we have run, we choose to ignore them. This implies that a perfectly acceptable solution would be to simply splat in a vector, and then splat in the volume over it for each kernel instantiation. A more accurate solution is described in the next section.

The addition of scalar splatting increases the computational time of the vector kernel substantially. The reason for this is twofold:

1) The pixels outside the projected vector must now be calculated and composited in.

2) Kernel calculations were skipped if the vector length was less than some user-specified tolerance. These must now be drawn if the scalar field contributes to the image (i.e., the scalar field is greater than a certain threshold).

The intensity along a ray passing through the vector plane at depth $dz$ within a voxel of depth $dv$ is:

$$I = \int_0^{dz} \rho(t)\, e^{-\int_0^{t} \chi(u)\,du}\, dt \;+\; e^{-\int_0^{dz} \chi(u)\,du} \int_{dz}^{dv} \nu(t)\, e^{-\int_{dz}^{t} \chi(u)\,du}\, dt \;+\; e^{-\int_0^{dz} \chi(u)\,du} \Bigl(1 - \int_{dz}^{dv} \nu(u)\,du\Bigr) \int_{dz}^{dv} \rho(t)\, e^{-\int_{dz}^{t} \rho(u)\,du}\, dt$$

where $\rho(x)$ and $\nu(x)$ represent the scalar density and vector density distributions along the ray, and $\chi(x)$ the total density distribution.

If we assume an infinitesimal thickness in the vector, and give it a fixed opacity, $\alpha_v$, and color, $I_v$, then the equation simplifies to:
$$I = \int_0^{dz} \rho(t)\, e^{-\int_0^{t} \rho(u)\,du}\, dt \;+\; \alpha_v I_v\, e^{-\int_0^{dz} \rho(u)\,du} \;+\; (1 - \alpha_v)\, e^{-\int_0^{dz} \rho(u)\,du} \int_{dz}^{dv} \rho(t)\, e^{-\int_{dz}^{t} \rho(u)\,du}\, dt$$

The value $dz$ can be calculated for each pixel ray from the plane consisting of the transformed vector and one of the basis vectors defining the viewing plane. By using the analytical integration proposed by Max, Hanrahan and Crawfis [Max90], this calculation requires only two exponential evaluations, or one additional exponential over the straight volume rendering.

The color-times-depth approximation, C*D, proposed by Wilhelms and Van Gelder [Wilhelms91], can be used to further simplify this equation. Here, three colors and opacities are computed: for the vector, in front of the vector, and in back of the vector, and composited together. If $I_s$ and $\alpha_s$ are the color and opacity per unit length for the scalar field, the equivalent equation for the simplified C*D calculation is:

$$I = I_s\,dz + (1 - dz\,\alpha_s)\,\alpha_v I_v + (1 - dz\,\alpha_s)(1 - \alpha_v)\,I_s\,(1 - dz)$$

While this follows logically, it does not produce the desired result. Consider the case where a ray just grazes the edge of an antialiased vector, such that $\alpha_v$ is almost zero. The cumulative intensity is then:

$$I = I_s\,dz + (1 - dz\,\alpha_s)\,I_s\,(1 - dz)$$

but the intensity of a neighboring pixel which does not intersect the vector is simply $I_s$. Thus for the C*D integration calculation, the formulas:

$$I = I_s\,dz + (1 - dz\,\alpha_s)\,\alpha_v I_v + (1 - \alpha_v)\,I_s\,(1 - dz)$$
$$\alpha = \alpha_s\,dz + (1 - dz\,\alpha_s)\,\alpha_v + (1 - \alpha_v)\,\alpha_s\,(1 - dz)$$

should be used. These are then composited into the image.

Efficiency Considerations

At least three possible tests can be used to reduce computations and thereby improve the efficiency. The first is on the length of the vector. If the magnitude of the vector falls below a certain threshold, then the calculations needed to render it can be skipped. With this comes the second test, on the maximum contribution of the splat. If the opacity of the scalar field falls below some threshold, then the calculations to render it can be skipped. Finally, the biggest win comes when both the above conditions are true. In this case, the entire kernel can be skipped.

The size of the resulting image, the span of the filter, and the stride of the filter all have an effect on the performance of the filter. Smaller images and filter sizes and larger strides can improve the performance of the filter. The resolution of the image's z-space also has a significant impact on the performance of the filter. All of these variables are specified under user control.

The simplicity of a filter makes it a natural choice for vectorization and parallel processing. Each pixel within the filter requires the same arithmetic, allowing it to be computed in parallel on even a SIMD machine. For the 2D filter or a top-down view with the 3D filter, several instantiations of the filter kernel can also operate in parallel.

Finally, since the filter samples both the vector and the scalar field, large amounts of memory may be necessary to maintain this data. However, the filter does process this data in a fairly sequential order.

Results

Figures 5 and 6 are taken from an HDTV animation presented at the SIGGRAPH '92 Film and Video show. Figure 5 illustrates the direct volume rendering of just the wind velocities, while Figure 6 illustrates the wind velocities and the percent cloudiness. All of this data was calculated from a global climate model with grid dimensions of 320 by 160 by 19. Figure 5 required 30 seconds to generate on an SGI Personal IRIS at NTSC resolution. Figure 6 required one minute. The simulated data consists of clouds and winds at every hour for ten days. Each day of the simulation generates 380Mb of data for the wind and percent cloudiness fields. Figure 7 shows an oblique view of a test function, simulating a tornado. Figure 8 illustrates the electric field around and within a small portion of a Boeing 737 jet, the avionics bay.
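The corrected C*D formulas and the efficiency tests above can be combined into a single per-pixel routine. The sketch below assumes a unit-depth voxel (so the scalar slab behind the vector has depth 1 - dz); the function name and the threshold values are assumptions for the example, not from the paper.

```python
def composite_vector_voxel(I_s, a_s, I_v, a_v, dz,
                           min_vec=1e-3, min_scalar=1e-3):
    """Composite a thin vector plane at depth dz inside a unit-depth scalar voxel.

    I_s, a_s : color and opacity per unit length of the scalar field
    I_v, a_v : color and opacity of the vector line segment at this pixel
    dz       : depth of the vector plane within the voxel (0..1)
    Returns (color, opacity) to composite into the image.
    """
    # Efficiency tests: skip the entire kernel when both contributions are negligible.
    if a_v < min_vec and a_s < min_scalar:
        return 0.0, 0.0
    # Vector too faint: the pixel reduces to the plain scalar slab (simply I_s).
    if a_v < min_vec:
        return I_s, a_s

    # Corrected C*D formulas: front scalar slab, vector plane, back scalar slab.
    color = (I_s * dz
             + (1.0 - dz * a_s) * a_v * I_v
             + (1.0 - a_v) * I_s * (1.0 - dz))
    alpha = (a_s * dz
             + (1.0 - dz * a_s) * a_v
             + (1.0 - a_v) * a_s * (1.0 - dz))
    return color, alpha
```

As the vector opacity a_v approaches zero, the color reduces to I_s*dz + I_s*(1 - dz) = I_s, matching the intensity of a neighboring pixel that misses the vector.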
Future Work

The above techniques provide an effective solution to the simultaneous display of a single scalar field and a single vector field. This allows the scientists to study the complex relationships between the winds and the clouds, or the winds and a specific atmospheric heating term. However, the scientists still need to understand the complex dynamics between the winds and several scalar variables (i.e., percent cloudiness, incoming and outgoing radiation, percent humidity, etc.). This is a general research topic to be addressed in not only the vector domain, but the scalar domain as well.

Finally, the technique outlined here does not take into account the overlap of the filters when drawing the vectors. This involves a trade-off decision between these inaccuracies and the complexity associated with keeping vectors consistent across splat or voxel domains.