Vector Fields

This document discusses techniques for visualizing 3D vector fields by representing the vector flow as textures. It proposes using a filter to sweep through volume data in back-to-front order, depositing anti-aliased lines based on vector orientations. Randomizing the filter's position and increments prevents regular patterns. This creates the illusion of dynamic flow without computationally expensive advection calculations. The goal is to integrate scalar and vector fields for visualizing cause-effect relationships in climate modeling data.


Visualization of Three-Dimensional Vector Fields

Introduction

The rendering of three-dimensional scalar fields has received much attention over the past several years. These 3D scalar fields can be represented using either isocontour surface reconstruction algorithms or as semi-transparent density clouds. With isocontour surfaces, intermediate geometry is produced and processed using the normal geometric pipelines developed during the last few decades. Thus, additional geometric objects such as axes, vectors or additional isocontours (from possibly different scalar fields) can easily be added to the display of the isocontour.

Unfortunately, this is not the case for volume density clouds. Shirley and Neeman [Shirley89] and Levoy [Levoy89] discuss the integration of separate geometric objects using ray tracing. The sorting required at each sample point makes this algorithm infeasible for a large number of geometric objects, such as that produced by the display of many tiny vectors.

Max, Hanrahan and Crawfis [Max90] demonstrate how to incorporate geometric surfaces into their back-to-front compositing of volume polyhedra. This was limited to isocontour surfaces, and also required splitting the polyhedron up into several pieces and shipping each individual piece to the volume renderer. Lately, projective techniques have been developed that use a geometric description of the cloud density within each voxel volume [Westover90], [Wilhelms91], [Laur91]. While these techniques are geometrically based, they require passing the polygons to the geometry pipeline in back-to-front order.

Research into the display of three-dimensional flow has also been explored over the past few years. Various algorithms to represent the flow via ribbons have been developed. Helman and Hesselink [Helman91] and Globus et al. [Globus91] have developed algorithms to display critical points within the flow field. These algorithms and the standard vector or hedgehog plots have no direct way of being combined with the direct volume visualization methods recently developed.

Particle systems [Reeves85] can be used to represent both scalar ([Max90], [Sabella88]) and vector fields ([van Wijk91], [van Wijk90]). Vector fields require an advection of the particles for each time step, and usually involve creating and deleting particles as time progresses. Unfortunately, these algorithms are quite computationally intensive, and the number of particles required for this purpose would be prohibitive. Kajiya [Kajiya89] also made allusions to representing vector fields using algorithms developed for the display of hairy surfaces. This technique is limited to the display of the vector flow upon a surface and is very computationally expensive, requiring several hours of CPU time per image. Spot noise [van Wijk91] offers an interesting technique for visualizing flow fields, but is again limited to the flow over a surface, and is also computationally expensive.

Our goal was to render the relationship between turbulent flow fields and scalar density fields throughout a three-dimensional volume. This was driven by a requirement to visualize the cause-and-effect relationship of clouds and winds within global climate models [Potter91]. Global climate modeling produces a time history of data, each time step of which needs to be rendered. We have investigated the use of high-frequency textures to represent vector fields in two dimensions [Crawfis91]. Here, an anisotropic texture is derived from the vector field. Time dynamics are then created by simply regenerating an anisotropic texture at each time point. Since we recognize frequency, but not phase, in patterns and textures, a smooth flow is created that provides the illusion of motion in an animation, without requiring any advection. A method that does use advection on the same climate data is described in [Max92]. We have extended our research on two-dimensional vector filters into three dimensions, and incorporated the integration of a scalar field with the compositing of the vector field to accomplish our goals.

Our technique for representing vector fields is to create a very fine-grained texture representation of the flow. Individual vectors, insignificant individually, combine to form a useful picture of the overall flow of the field. We have developed a filter which can be used to sweep through a volume image in back-to-front order. The kernel of this filter can be used to represent both scalar and vector quantities for two- and three-dimensional data sets. The basic algorithm to render a two-dimensional vector field passes a vector kernel filter across the resultant image. The kernel deposits an anti-aliased line over the width of its domain. Each individual line is composited in using the OR operator, and its orientation is based on a sampling of the vector field. By carefully controlling the movement of the filter, highly dynamic flow fields can be represented (Figure 5).
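As an illustrative sketch only (not the authors' code: the function names are hypothetical, a scanline sweep and a simple max-combine stand in for the paper's ordering and OR compositing, and the deposited lines are left unantialiased for brevity), the basic kernel sweep might look like:

```python
import numpy as np

def render_vector_texture(field, size=128, stride=4, length=8):
    """Sweep a vector kernel across the output image, depositing a
    short line segment oriented by the sampled vector field.
    `field(u, v)` returns (vx, vy) for normalized coordinates."""
    img = np.zeros((size, size))
    for y in range(0, size, stride):
        for x in range(0, size, stride):
            vx, vy = field(x / size, y / size)   # sample the vector field
            mag = float(np.hypot(vx, vy))
            if mag < 1e-6:
                continue                         # skip negligible vectors
            dx, dy = vx / mag, vy / mag          # line orientation
            for t in np.linspace(-length / 2, length / 2, 2 * length):
                px = int(round(x + t * dx))
                py = int(round(y + t * dy))
                if 0 <= px < size and 0 <= py < size:
                    # max-combine so overlapping lines stay crisp
                    img[py, px] = max(img[py, px], min(mag, 1.0))
    return img

# Rotational test field: vectors circle the image center.
tex = render_vector_texture(lambda u, v: (v - 0.5, 0.5 - u))
```

Anti-aliasing and depth compositing are deliberately omitted here; a full filter would also jitter its movement, as the next section describes.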

Filter Movement i
A key criterion for good texture generation of the vector field is to avoid patterns caused by the regular movement of the filter, or the regular spacing of the data. Three options to overcome these patterns are available. Within the filter kernel, the center point (P in Figure 2) through which the vector passes is randomly chosen (Figure 3a). The major reduction in patterns comes from controlling the movement of the entire filter. The filter is moved with random jitters in its increment to prevent the regular spacing apparent from the clipped edges of the vectors (Figure 3b). Finally, the filter is moved in increments smaller than its width (Figure 3c). This allows vectors to overlap, and blurs the extent of each individual vector. While the differences between these three images may not be substantial, when we animate the images, very different results appear. What we are after is the illusion that the particles, or the texture as a whole, are moving, not individual flags waving in the wind. Whether all of these jitterings are necessary has yet to be determined; however, none of them require additional resources of any significance. These jitterings were developed for two-dimensional filters and side or top views of three-dimensional data sets. Oblique views in three dimensions will naturally break up regular patterns to some extent.

Vector Kernels

Vectors are represented on the image as line segments with varying color and opacity. As the filter moves along the output image, a vector kernel performs three tasks: determine the projection of the vector, calculate the color and opacities of the vector line segment and the scalar function, and composite this information into the image. The vector is projected onto the viewing plane by taking the dot product of the vector with two basis vectors defining the viewing plane:

    u_x = V · i
    u_y = V · j

where the projected vector u = (u_x, u_y)^T. This projected vector, u, is then normalized for use in future operations.

Figure 2. Determining pixel values.

Once we have the projection of the vector onto the screen, we then need to determine for each pixel what fraction of the vector lies within the pixel and what the various attributes of the vector are at that pixel. Assuming circular pixels, the area of overlap with a thick line segment can be estimated by taking the absolute value of the dot product of the vector n_p, perpendicular to u, with the vector from the center point to the current pixel (Figure 2). This gives us the perpendicular distance from the axis of the vector to the pixel. The function

    f(r) = 1.0        for r <= r1
           a·r + b    for r1 <= r <= r2
           0.0        for r >= r2

can then be used to produce smooth anti-aliased lines. The values of r1 and r2 control the thickness of the line segments, and are specified by the user. These anti-aliased lines work better and are computationally easier than using cylinders.

The area is used as an opacity and color scaling factor in compositing the vector into the image. Several controls over the representation of the vectors are available. Depending on the kernel, an arbitrary color mapping scheme is offered. Current kernels will map either the world z-coordinates or the screen z-coordinates (z-height) to a color in a user-specified color table. The z-height can include both the relative position in the data set and the height increase of the vector across the filter. By heavily weighting this latter term, color can be mapped to show the vertical velocity component. Other color mapping schemes, such as those proposed by Van Gelder and Wilhelms [VanGelder92], could easily be incorporated. This color is used as the base color or hue. By desaturating one end of the vector, we can add an indication of the signed direction of the vector (i.e., a vector head). Here, if we simply take the dot product of the projected vector and the vector to the pixel center, we will get a measure of where we are along the axis of the vector. Since we are only concerned with the pixels along the vector axis at this point, we can use this measure directly.

Finally, we adjust the vector's intensity by its magnitude. A depth cue can also be applied by adjusting the intensity based on the linear distance from the view point.
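The perpendicular-distance test and the f(r) ramp can be sketched as follows. This is a hedged reconstruction, not the paper's code: the function name is illustrative, r1 and r2 are the user-specified thickness controls, and the linear segment a·r + b is solved from the boundary conditions f(r1) = 1 and f(r2) = 0.

```python
import numpy as np

def line_coverage(pixel, center, u, r1=0.5, r2=1.5):
    """Anti-aliased coverage of one pixel by a thick line through
    `center` with normalized direction `u`, using the f(r) ramp:
    1.0 inside r1, linear falloff to 0.0 at r2."""
    ux, uy = u
    n_p = np.array([-uy, ux])                # n_p: perpendicular to u
    d = np.asarray(pixel, float) - np.asarray(center, float)
    r = abs(float(np.dot(n_p, d)))           # distance from the line's axis
    if r <= r1:
        return 1.0
    if r >= r2:
        return 0.0
    return (r2 - r) / (r2 - r1)              # the a*r + b segment

# Pixels fade smoothly as they move off the line's axis:
# line_coverage((0, 0), (0, 0), (1, 0)) -> 1.0
# line_coverage((0, 1), (0, 0), (1, 0)) -> 0.5
```

The returned coverage plays the role of the overlap area: an opacity and color scaling factor when compositing the vector into the image.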
Figure 3. Vector Kernel Movement Effects: a) Jittering of the center point, b) Jittering of the stride length, c) Overlapping strides.
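The three movement effects of Figure 3 can be sketched together as a position generator; the jitter amounts and the uniform distributions here are assumptions, since the paper leaves these quantities to the user, and all names are illustrative:

```python
import numpy as np

def jittered_positions(size, width, overlap=0.5, seed=None):
    """Filter placement with the three pattern-breaking options:
    (a) a randomized kernel center point P, (b) a random jitter on
    each increment, (c) strides smaller than the filter width."""
    rng = np.random.default_rng(seed)
    stride = width * overlap            # (c) overlapping strides
    centers = []
    y = 0.0
    while y < size:
        x = 0.0
        while x < size:
            jx = x + rng.uniform(-0.5, 0.5) * stride   # (b) jittered stride
            jy = y + rng.uniform(-0.5, 0.5) * stride
            # (a) random center point within the kernel's domain
            centers.append((jx + rng.uniform(0, width),
                            jy + rng.uniform(0, width)))
            x += stride
        y += stride
    return centers
```

Because the stride, not the jitter, advances the loop, the expected density of kernels stays uniform while the regular grid spacing is destroyed.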

Scalar Rendering

For overlapping filters, a splatting-like algorithm works well. The ideal reconstruction filter described by Max [Max91] is used as the basis for the splat. This filter:

    g(r) = 1 - r^2 / (s·t)            for 0 <= r <= s
           (t - r)^2 / (t·(t - s))    for s <= r <= t
           0.0                        for r >= t

with s = 0.48 and t = 1.37 for a filter stride of one, has a finite span, allowing the size of the kernel to be arbitrarily large. This allows long vectors, while limiting the effect of the splat to the stride taken in the filter. This does present the problem that the vector drawn with the splat is only affected by that splat and not by neighboring splats that may overlap the vector. Overcoming this would require complicated neighborhood tests, and the handling of multiple vector segments within the kernel. Since these discrepancies in the renderer are not noticeable for the test cases we have run, we choose to ignore them. This implies that a perfectly acceptable solution would be to simply splat in a vector, and then splat in the volume over it for each kernel instantiation. A more accurate solution is described in the next section.

The addition of scalar splatting increases the computational time of the vector kernel substantially. The reason for this is twofold:

1) The pixels outside the projected vector must now be calculated and composited in.

2) Kernel calculations were skipped if the vector length was less than some user-specified tolerance. These must now be drawn if the scalar field contributes to the image (i.e., the scalar field is greater than a certain threshold).

Compositing

Once we have the contribution due to the vector field and the contribution due to the scalar field at each sampled voxel, we can calculate the total contribution to each pixel. Consider a ray, of unit length, from the eye passing through a polyhedron within which we wish to render a vector (Figure 4). The segment of this ray passing through the polyhedron is broken up into three parts: the segment in front of the vector, the segment passing through the vector, and the segment behind the vector. Let dz represent the length of the front segment. If the vector has a thickness, dv, then the segments have lengths dz, dv, and (1 - dz - dv). If we then assume a homogeneous opacity and color, the intensity can be calculated using the equation:

    I = ∫[0,dz] ρ(t) e^(-∫[0,t] τ(u) du) dt
      + e^(-∫[0,dz] τ(u) du) ∫[dz,dz+dv] ρv(t) e^(-∫[dz,t] τ(u) du) dt
      + e^(-∫[0,dz] τ(u) du) e^(-∫[dz,dz+dv] τ(u) du) ∫[dz+dv,1] ρ(t) e^(-∫[dz+dv,t] τ(u) du) dt

where ρ(x) and ρv(x) represent the scalar density and vector density distributions along the ray, and τ(x) the total density distribution.

If we assume an infinitesimal thickness in the vector, and give it a fixed opacity, αv, and color, Iv, then the equation simplifies to:

    I = ∫[0,dz] ρ(t) e^(-∫[0,t] τ(u) du) dt
      + αv·Iv·e^(-∫[0,dz] τ(u) du)
      + (1 - αv) e^(-∫[0,dz] τ(u) du) ∫[dz,1] ρ(t) e^(-∫[dz,t] τ(u) du) dt

The value dz can be calculated for each pixel ray from the plane consisting of the transformed vector and one of the basis vectors defining the viewing plane. By using the analytical integration proposed by Max, Hanrahan and Crawfis [Max90], this calculation requires only two exponential evaluations, or one additional exponential over the straight volume rendering.

The color-times-depth approximation, C*D, proposed by Wilhelms and Van Gelder [Wilhelms91], can be used to further simplify this equation. Here, three colors and opacities are computed for the vector, in front of the vector, and in back of the vector, and composited together. If Is and αs are the color and opacity per unit length for the scalar field, the equivalent equation for the simplified C*D calculation is:

    I = Is·dz + (1 - dz·αs)·Iv + (1 - dz·αs)·(1 - αv)·Is·(1 - dz)

While this follows logically, it does not produce the desired result. Consider the case where a ray just grazes the edge of an antialiased vector, such that αv is almost zero. The cumulative intensity is then:

    I = Is·dz + (1 - dz·αs)·Is·(1 - dz)

but the intensity of a neighboring pixel which does not intersect the vector is simply Is. Thus, for the C*D integration calculation, the formulas:

    I = Is·dz + (1 - dz·αs)·Iv + (1 - αv)·Is·(1 - dz)
    α = αs·dz + (1 - dz·αs)·αv + (1 - αv)·αs·(1 - dz)

should be used. These are then composited into the image.

Efficiency Considerations

At least three possible tests can be used to reduce computations and thereby improve the efficiency. The first is on the length of the vector. If the magnitude of the vector falls below a certain threshold, then the calculations needed to render it can be skipped. With this comes the second test, on the maximum contribution of the splat. If the opacity of the scalar field falls below some threshold, then the calculations to render it can be skipped. Finally, the biggest win comes when both the above conditions are true. In this case, the entire kernel can be skipped.

The size of the resulting image, the span of the filter, and the stride of the filter all have an effect on the performance of the filter. Smaller images and filter sizes and larger strides can improve the performance of the filter. The resolution of the image's z-space also has a significant impact on the performance of the filter. All of these variables are specified under user control.

The simplicity of a filter makes it a natural choice for vectorization and parallel processing. Each pixel within the filter requires the same arithmetic, allowing it to be computed in parallel on even a SIMD machine. For the 2D filter, or a top-down view with the 3D filter, several instantiations of the filter kernel can also operate in parallel.

Finally, since the filter samples both the vector and the scalar field, large amounts of memory may be necessary to maintain this data. However, the filter does process this data in a fairly sequential order.

Results

Figures 5 and 6 are taken from an HDTV animation presented at the SIGGRAPH '92 Film and Video show. Figure 5 illustrates the direct volume rendering of just the wind velocities, while Figure 6 illustrates the wind velocities and the percent cloudiness. All of this data was calculated from a global climate model with grid dimensions of 320 by 160 by 19. Figure 5 required 30 seconds to generate on an SGI Personal IRIS at NTSC resolution; Figure 6 required one minute. The simulated data consists of clouds and winds at every hour for ten days. Each day of the simulation generates 380 Mb of data for the wind and percent cloudiness fields. Figure 7 shows an oblique view of a test function simulating a tornado. Figure 8 illustrates the electric field around and within a small portion of a Boeing 737 jet, the avionics bay.
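Returning to the corrected C*D formulas, a minimal numeric sketch (hypothetical helper name; Iv is taken to be the vector color already scaled by its coverage) makes the grazing-ray property easy to check:

```python
def cd_composite(I_s, a_s, I_v, a_v, dz):
    """Corrected color-times-depth (C*D) combination along a unit ray:
    a scalar segment of length dz in front of the vector, the vector
    itself, then the remaining scalar segment of length (1 - dz)."""
    I = I_s * dz + (1 - dz * a_s) * I_v + (1 - a_v) * I_s * (1 - dz)
    a = a_s * dz + (1 - dz * a_s) * a_v + (1 - a_v) * a_s * (1 - dz)
    return I, a

# A ray that just grazes the vector (a_v ~ 0, I_v ~ 0) reduces to the
# pure scalar result, matching the neighboring-pixel argument.
```

With a_v = 0 and I_v = 0 this returns (Is, αs) for any dz, which is exactly the consistency condition the uncorrected formula violated.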

Future Work
The above techniques provide an effective solution to the simultaneous display of a single scalar field and a single vector field. This allows the scientists to study the complex relationships between the winds and the clouds, or the winds and a specific atmospheric heating term. However, the scientists still need to understand the complex dynamics between the winds and several scalar variables (i.e., percent cloudiness, incoming and outgoing radiation, percent humidity, etc.). This is a general research topic to be addressed in not only the vector domain, but the scalar domain as well.

Figure 4. Integrating along the viewing direction.

We have simplified the problem here by flattening the terrain in the climate models and dealing only with a regular grid. In fact, global climate models and many other grand challenge problems involve irregular topologies which must be dealt with.

We have also concentrated our attention on the techniques and representations, rather than on efficient solutions. While the techniques are fairly efficient, improvements must still be made to achieve interactive levels. The use of table lookups as described by Laur and Hanrahan [Laur91] and Westover [Westover90], and the use of Gouraud-shaded or hardware texture-mapped polygons, should be evaluated.

Finally, the technique outlined here does not take into account the overlap of the filters when drawing the vectors. This involves a trade-off decision between these inaccuracies and the complexity associated with keeping vectors consistent across splat or voxel domains.
