
Online Submission ID: 0259

Efficient and Effective Volume Visualization with Enhanced Isosurface Rendering

arXiv:1202.5360v1 [cs.GR] 24 Feb 2012

Abstract

Compared with full volume rendering, isosurface rendering has several well-recognized advantages in efficiency and accuracy. However, standard isosurface rendering has some limitations in effectiveness. First, it uses a monotone-colored approach and can only visualize the geometric features of an isosurface. The lack of the capability to illustrate the material property and the internal structures behind an isosurface has been a major limitation of this method in applications. Another limitation of isosurface rendering is the difficulty of revealing physically meaningful structures that are hidden in one or multiple isosurfaces. As such, application requirements to extract and recombine structures of interest cannot be met effectively with isosurface rendering. In this work, we develop an enhanced isosurface rendering technique to improve the effectiveness while maintaining the performance efficiency of standard isosurface rendering. First, an isosurface color enhancement method is proposed to illustrate the neighborhood density and to reveal some of the internal structures. Second, we extend the structure extraction capability of isosurface rendering by enabling explicit scene exploration within a 3D view, using surface peeling, voxel selecting, isosurface segmentation, and multi-surface-structure visualization. Our experiments show that the color enhancement not only improves the visual fidelity of the rendering, but also reveals the internal structures without significant increase of the computational cost. Explicit scene exploration is also demonstrated as a powerful tool in some application scenarios, such as displaying multiple abdominal organs.

CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—Display algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture

Keywords: isosurfaces, volume rendering, transfer-functions, GPU ray-casting

1 Introduction

Isosurface rendering is one of the canonical techniques used for volume visualization. Compared with other visualization methods, ray-cast-based isosurface rendering has several widely acknowledged advantages in efficiency and accuracy. For example, the ray-isosurface intersection test can be performed very efficiently and accurately at the voxel level for each viewing ray. Empty-space-skipping techniques can also be implemented intuitively and yield very good performance. In comparison, some other volume rendering techniques, such as semi-transparent rendering (also called full volume rendering), need to take many samples in the volume and involve intensive calculation. Although empty space skipping is also possible for full volume rendering, the performance gain is relatively limited compared with isosurface rendering.

Standard isosurface rendering, however, also has some limitations in effectiveness. First, isosurface rendering usually assigns a monotone color to an isosurface as the material color. Therefore, only the geometric properties of the isosurface can be visualized with shading. On the other hand, as we have noticed, full volume rendering is also capable of rendering isosurfaces, which can be done by using a transfer function with a narrow transitional section. Visualizations achieved with such a rendering setting can provide considerably more information about the neighborhood density, as shown in Figure 1. In many applications, it can be significantly beneficial to visualize the internal structures behind an isosurface. For example, in CT colonography, clinicians want to identify the nature of polyps, but a monotone-colored isosurface rendering provides no information about the internal structure for decision making. To address this requirement, [Pickhardt 2004] conducted a full volume rendering with a selected window so that the internal structure is revealed. However, additional computation is needed and the overall performance suffers.

Figure 1: Comparison of isosurface rendering and full volume rendering of the same isosurface. (a) Isosurface rendering. (b) Full volume rendering.

Another limitation of isosurface rendering is the difficulty of identifying structures of interest and efficiently visualizing multiple structures of interest in a single view. Usually, an isosurface contains many different physically meaningful structures, and the user is interested in only a few of them. The rest of the structures, as they are connected as a whole, are not only unnecessary but also visually confusing. To tackle this problem, segmentation can be considered. However, direct volume segmentation methods usually do not integrate very well with isosurface rendering, and artifacts are likely to be introduced.

In this work, we develop an enhanced isosurface rendering technique to improve the effectiveness while maintaining the performance efficiency of the standard isosurface rendering. First, we propose an isosurface color enhancement method to illustrate the material property and the internal structures behind the isosurface. Our approach considers the neighborhood volume directly behind the isosurface, and applies this information to derive the property of the local material varying along the isosurface. Since it is a coloring/shading method, the voxel-traversal and ray-voxel intersection scheme is not changed from the original isosurface rendering. Therefore the simplicity of the standard isosurface rendering is kept the same, and the computational complexity of the enhanced isosurface rendering is not increased. Second, we enhance the structure extraction capability of isosurface rendering by enabling explicit scene exploration within a 3D view, using surface peeling, voxel selecting, isosurface segmentation, and multi-structure visualization. During this process, different structures can be extracted from different isosurfaces intuitively. Then, the surface structures are recombined into a single scene for display. The segmentation and recombination are done within the subset of the voxels that contain the required isosurface, so the segmentation result is very suitable to be integrated with the isosurface rendering scheme.

The remainder of this paper is organized as follows. In Section 2, we introduce the related work in volume rendering. In Section 3, we provide an overview of our proposed approach. In Section 4, we describe the model and implementation of the isosurface color enhancement. In Section 5, we illustrate the techniques used in the explicit scene exploration framework. The experimental results are discussed in Section 6. In Section 7, we discuss some related issues and the future development of this work.

2 Related Work

2.1 Isosurface Rendering

Isosurface rendering has been intensively studied over the years, and many techniques have been developed which make isosurface rendering a highly efficient and accurate approach for volume visualization. To identify the intersection of a viewing ray and an implicitly defined isosurface in a volume, two techniques are involved: one is ray traversal, the other is the intersection test.

For ray traversal, two typical schemes are uniform sampling and voxel-oriented traversal. Uniform sampling is more widely used in GPU-based volume ray-casting [Stegmaier et al. 2005], and it is preferred when used along with algorithms like pre-integration [Engel et al. 2001][Lum et al. 2004]. On the other hand, voxel-oriented traversal provides an opportunity to do a precise intersection test. In earlier volume ray-casting techniques designed for the CPU, the 3DDDA algorithm [Amanatides and Woo 1987] is often used, such as in [Parker et al. 1998].

There are also a couple of ways to perform the intersection test. The simplest way is based on a linear approximation between samples. An equivalent method is to use pre-integration. In voxel-oriented ray traversal schemes, if tri-linear interpolation is used, the value along the ray segment within a voxel can be considered a cubic function, and the intersection test is converted to a cubic equation problem, which can be solved in a closed form [Parker et al. 1998] or using piecewise recursive root finding [Marmitt et al. 2004]. Approximate methods can provide a faster but less accurate result, as given in [Neubauer et al. 2002] and [Scharsach 2005].

When a GPU is used, tri-linear interpolation is supported by the hardware. In that case, performance can be further improved by fetching fewer samples. In [Ament et al. 2010], it is pointed out that as few as 4 data fetches per voxel are enough for the coefficient extraction. In this work, we apply a similar strategy for the ray-isosurface intersection test.

2.2 Full Volume Rendering

The core concept of full volume rendering is the volume rendering integral, which is based on an optical model described by Max [Max 1995]:

\[
I = I_0 \exp\Big(-\int_0^{Far}\mu(l)\,dl\Big) + \int_0^{Far} c(l)\,\mu(l)\,\exp\Big(-\int_0^{l}\mu(t)\,dt\Big)\,dl
\]

The integration is performed for a viewing ray cast from the viewpoint, where l = 0, to the far end, where l = Far. I_0 is the light coming from the background, μ is the per-unit-length extinction coefficient, and c is an intensity or color. In this paper, the neighborhood of the isosurface to be rendered is considered in this model.

To distinguish different contents in a volume dataset during rendering, classification and segmentation are two commonly used technologies. Classification can be done by a variety of different transfer functions, which differ from each other mainly in the feature domains where they are defined. While the most commonly used transfer function is the intensity-based 1D transfer function, there are many other transfer functions based on other features such as gradient magnitude [Kniss et al. 2002], size feature [Correa and Ma 2008], and texture feature [Caban and Rheingans 2008]. Although efforts have been made, classification-based methods are only partially capable of the content-distinguishing task. Sometimes, segmentation is necessary. As another intensively studied area, most segmentation methods are not specifically designed for visualization, and the integration of segmentation results and volume ray-casting has been problematic.

To visualize a segmented volume, different transfer functions can be assigned to different segments of the volume. An issue that has to be dealt with is boundary filtering. On vector GPUs, a solution is given in [Hadwiger et al. 2003]. However, most recent GPUs use a scalar architecture for better programmability, in which case another solution given in [Xiang et al. 2010] is more suitable. In this work, instead of volume segmentation, we use isosurface segmentation, which works on the subset of voxels containing the isosurface.

2.3 Min-Cut Algorithm

Many segmentation problems can be converted into a min-cut problem in graph theory. The optimization method described by [Boykov and Kolmogorov 2004] provides a very efficient tool to deal with this kind of problem. In our work, we define the isosurface segmentation as a min-cut problem and use this algorithm for the optimization.

3 Overview

In this section, we provide an overview of our approach, as shown in Figure 2, mainly focusing on the contributions of this work.

Figure 2: The overview of the contributions of our approach. The proposed enhanced isosurface rendering technique is accomplished by the color enhancement method and the explicit scene exploration scheme. Explicit scene exploration consists of surface peeling, voxel-selecting, isosurface segmentation, and multi-surface-structure visualization.

The first contribution, the color enhancement technique, aims to visualize the neighborhood density and reveal the internal structures behind the isosurface. The technique is applied to each of the intersection points on the isosurface to provide a material color. Our second contribution, the explicit scene exploration scheme, is a composition of several novel techniques aiming to enhance the structure extraction capability of isosurface rendering. When provided with a volume dataset of which we have little knowledge, cropping and surface peeling can be used for a brief exploration so that the structures of interest can be quickly identified. Then, using the isosurface segmentation, the structures of interest can be extracted and represented as isosurface segments. With a slightly modified isosurface intersection search procedure along with the color enhancement technique, the isosurface segments can be rendered within a single scene, which we call multi-surface-structure visualization. During the isosurface segmentation, a few seed voxels are specified by the user. An intuitive seed-selecting interface is designed to handle this task, which involves two techniques: voxel selecting and selection-based coloring. In voxel selecting, seed points are collected by the user's mouse clicks and drags on the image plane; the selection-based coloring provides a preview of these seeds. The interaction process works directly in a 3D view, which is why we call it "explicit scene exploration". During this process, the voxel-based ray traversal and ray-isosurface intersection search is an essential part of the system, which yields an accurate ray-isosurface intersection for each ray. The IDs of the voxels containing the intersection points are also collected in this process.
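The voxel-by-voxel ray traversal underlying this intersection search (the 3DDDA scheme of Amanatides and Woo cited in Section 2.1) can be sketched as follows. This is a minimal illustrative Python version, not the paper's CUDA implementation; the function and parameter names are ours:

```python
import math

def traverse_voxels(origin, direction, grid_shape, max_steps=1024):
    """Enumerate the voxel indices pierced by a ray (3DDDA / Amanatides-Woo).

    origin: ray start, in voxel coordinates (assumed inside the grid).
    direction: ray direction (need not be normalized, but non-zero).
    Yields (ix, iy, iz) voxel indices in visiting order.
    """
    ix, iy, iz = (int(math.floor(c)) for c in origin)
    step = [1 if d > 0 else -1 for d in direction]
    # Parametric distance to the first boundary crossing on each axis (t_max),
    # and the distance between successive crossings on that axis (t_delta).
    t_max, t_delta = [], []
    for c, d, i in zip(origin, direction, (ix, iy, iz)):
        if d == 0:
            t_max.append(math.inf)
            t_delta.append(math.inf)
        else:
            next_boundary = i + (1 if d > 0 else 0)
            t_max.append((next_boundary - c) / d)
            t_delta.append(1.0 / abs(d))
    for _ in range(max_steps):
        if not (0 <= ix < grid_shape[0]
                and 0 <= iy < grid_shape[1]
                and 0 <= iz < grid_shape[2]):
            return  # left the volume
        yield (ix, iy, iz)
        # Step across whichever voxel face the ray reaches first.
        axis = t_max.index(min(t_max))
        if axis == 0:
            ix += step[0]
        elif axis == 1:
            iy += step[1]
        else:
            iz += step[2]
        t_max[axis] += t_delta[axis]
```

Because each voxel is visited exactly once and in front-to-back order, a per-voxel intersection test (and the voxel-ID collection described above) can be run inside this loop.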

4 Color Enhancement

Traditional isosurface rendering usually assigns a monotone color to an isosurface as the material color. The reason is that every point belonging to an isosurface has the same scalar value. Since the only scalar value is used for defining the geometry of the structure, there is no information directly available to define the material of that surface. However, if we consider a small neighborhood directly behind the isosurface, it is possible to get some information about the local material varying along the isosurface.

4.1 Visualization Model

Inspired by full volume rendering, we consider the isosurface neighborhood to be semi-transparent. In full volume rendering, such a neighborhood can be defined by a transfer function in which the opacity transits from 0 to 1 within a narrow range near the isovalue. As shown in Figure 3, such a transitional section can be specified by an isovalue and a local transfer function. The transitional section ranges from isovalue to isovalue + Δv. Scalar values less than isovalue are mapped to fully transparent, and scalar values greater than isovalue + Δv are mapped to fully opaque.

Figure 3: Transfer-function defined by a single transitional section.

If the neighborhood of the isosurface corresponding to Δv is sufficiently thin, it can be assumed that most of the viewing rays hitting the isosurface go through the whole transitional range, and the alpha value finally accumulates to 1. In this case, the accumulated color is mainly decided by the thickness Δl of the neighborhood where the accumulation happens. If Δl is relatively big, the color of the nearer neighborhood plays an important role, and the contribution of the farther neighborhood is relatively small since the opacity accumulates. Conversely, if Δl is small, the color of the farther neighborhood counts more.

Figure 4: Linear approximation of the isovalue neighborhood. l denotes the length of the ray; v denotes the scalar value.

From this intuitive observation, we consider using the scalar changing rate (or directional derivative) as a local material hint, since Δl can be linearly approximated by dividing the value range Δv by the directional derivative.
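A minimal numerical sketch of this approximation (the synthetic field, sampling grid, and names are ours, not the paper's): the gradient is estimated by central differences from six neighboring samples, projected onto the viewing ray to obtain the directional derivative, and Δl then follows from Δv:

```python
import numpy as np

def directional_derivative(field, point, ray_dir, h=1.0):
    """Central-difference gradient of a grid-sampled scalar field at an
    interior grid point, projected onto the viewing-ray direction."""
    grad = np.empty(3)
    for axis in range(3):
        lo = list(point)
        hi = list(point)
        lo[axis] -= 1
        hi[axis] += 1
        grad[axis] = (field[tuple(hi)] - field[tuple(lo)]) / (2.0 * h)
    return float(np.dot(grad, ray_dir))

# Synthetic field whose value grows linearly along x: v = 2 * x.
x, y, z = np.mgrid[0:8, 0:8, 0:8]
field = 2.0 * x.astype(float)

rate = directional_derivative(field, (4, 4, 4), np.array([1.0, 0.0, 0.0]))
delta_v = 10.0            # width of the transitional section (arbitrary here)
delta_l = delta_v / rate  # thin-shell thickness along the ray
```

For this linear field the directional derivative along +x is exactly 2, so a transitional range of 10 corresponds to a neighborhood thickness of 5 along the ray.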
However, let us still begin with Δl, because it is directly related to the alpha values used for accumulation in full volume rendering. In full volume rendering, if n scalar values are evenly sampled within the isosurface neighborhood of thickness Δl, the alpha value of a sample used for accumulation can be calculated as:

\[
\alpha = \alpha_T \, \frac{\Delta l / n}{stdSampleDistance}
\]

where α_T denotes the alpha value defined in the transfer function, assuming a sample distance stdSampleDistance. With the linear approximation, there is:

\[
\Delta l \approx \frac{\Delta v}{v'_l(l_{iso})}
\]

where v'_l(l_iso) denotes the scalar changing rate at the intersection point. So:

\[
\alpha \approx \alpha_T \, \frac{\Delta v / n}{stdSampleDistance \cdot v'_l(l_{iso})} \quad (1)
\]

Now, we define:

\[
densityFactor = \frac{\Delta v}{stdSampleDistance}, \qquad
speed = \frac{v'_l(l_{iso})}{densityFactor}, \quad speed \in (0, +\infty)
\]

In this way, Equation (1) can be written as \( \alpha \approx \alpha_T \, \frac{1}{n \cdot speed} \).

From the above formulations, we know that:

• A value speed can be easily calculated from the scalar changing rate at the intersection point, v'_l(l_iso), with densityFactor as a global tunable parameter.

• Given the value speed, the alpha values used for accumulation can be approximated.

4.2 Speed-color Map

During rendering, we use v'_l(l_iso) to characterize the local material. To efficiently calculate the material color from v'_l(l_iso), a map from speed to color C⃗ is built as follows. Given a local transfer function

\[
\{\alpha_{Ti}, \ \vec{c}_i = (r_i, g_i, b_i), \ i = 1, 2, 3, \ldots, n\}
\]

for each speed, \( \alpha_{speed,i} = \alpha_{Ti} \frac{1}{n \cdot speed} \). The accumulated color C⃗(speed) can be pre-calculated by alpha blending:

\[
\vec{C}(speed) = \sum_{i=1}^{n} \alpha_{speed,i}\,\vec{c}_i \prod_{j=1}^{i-1} \left(1 - \alpha_{speed,j}\right)
\]

However, since speed has an infinite value range, we cannot simply build a speed-color map linearly. In our implementation, we use a logarithm-sampled speed-color map of size m, \( \vec{C}_j = \vec{C}\big({-\ln(1 - j/m)}\big), \ j = 1, 2, 3, \cdots, m \), to store the pre-calculated material colors.

4.3 Estimation of Scalar Changing Rate

The scalar changing rate v'_l(l_iso) can be estimated in different ways during rendering, which has a significant effect on the rendering result. For basic coloring needs, we can simply use \( \nabla v \cdot \vec{r} \), where ∇v denotes the gradient vector, which will be calculated for shading anyway, and r⃗ is the direction vector of the viewing ray. In our implementation, ∇v is calculated on the fly with 6 neighboring scalar samples. However, using an estimation like this, only a shallow neighborhood behind the isosurface can be reflected by the resulting color, as Figure 5 (b) shows.

Figure 5: Displaying shallow and deep neighborhood with different estimations of the scalar changing rate. (a) Monotone colored. (b) Displaying the shallow neighborhood, estimated with the gradient vector. (c) Displaying internal structures.

To reveal the internal structures, the scalar changing rate should be estimated within a larger scope. To this end, we add Δv as another global tunable parameter. In the scalar changing rate estimation, we first perform an iterative search along the viewing ray to find the Δl where the scalar value reaches isovalue + Δv. Then the scalar changing rate is estimated by Δv/Δl. In this way, we find that many of the internal structures can be revealed without noticeable performance impact. Figure 5 (c) is an example, and more will be presented in the results part.

5 Explicit Scene Exploration

An innate advantage of isosurface rendering is that, for each ray, a definite intersection point can be generated. When voxel-based ray traversal is used, the exact voxel can also be identified. This feature can be very useful in interaction design. For example, it enables the direct picking of voxels from the volume by mouse clicking and dragging within the 2D image plane. By exploiting this advantage, we develop an explicit scene exploration framework to quickly identify the structures of interest, pick them out, and recombine them into a single scene.

5.1 Accuracy Assurance

To guarantee the quality of the interaction, such as the voxel-selecting operation, it is crucial to make sure that the ray-isosurface intersection test is accurate for each ray. First, voxel-based ray traversal such as the 3DDDA algorithm should be used. Second, the intersection finding within the voxel should be accurately performed. When trilinear interpolation is assumed, the value along the ray segment within each voxel can be expressed as a cubic function of the ray parameter. Once the coefficients of the cubic function are decided, the intersection test can be transformed into solving a cubic equation, which can be efficiently solved using the method proposed by Marmitt et al. [Marmitt et al. 2004]. There are different ways to decide the coefficients of the cubic function. We choose to use four values along the ray in each voxel to estimate the coefficients of the cubic polynomial, as is shown in Figure 6.
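As a concrete illustration of this coefficient estimation, the sketch below fits the cubic through the four samples (t = 0, 1/3, 2/3, 1) and solves for the first crossing of the isovalue. For brevity we use numpy polynomial root finding rather than the iterative root isolation of Marmitt et al.; the function names are ours:

```python
import numpy as np

def intersect_voxel(v0, v1, v2, v3, isovalue):
    """First ray parameter t in [0, 1] where the interpolated value reaches
    `isovalue`, given four samples at t = 0, 1/3, 2/3, 1 (p0..p3 in
    Figure 6); returns None if there is no crossing inside the voxel.

    Along a ray segment a trilinear field is cubic in t, so four samples
    determine the polynomial exactly.
    """
    ts = np.array([0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0])
    coeffs = np.polyfit(ts, [v0, v1, v2, v3], 3)  # highest power first
    coeffs[-1] -= isovalue                        # solve v(t) - isovalue = 0
    while len(coeffs) > 1 and abs(coeffs[0]) < 1e-10:
        coeffs = coeffs[1:]                       # drop negligible leading terms
    hits = [r.real for r in np.roots(coeffs)
            if abs(r.imag) < 1e-8 and -1e-8 <= r.real <= 1.0 + 1e-8]
    return min(hits) if hits else None

# A field rising as v(t) = t^3 crosses isovalue 0.125 at t = 0.5.
t_hit = intersect_voxel(0.0, 1.0 / 27.0, 8.0 / 27.0, 1.0, 0.125)
```

Taking the smallest admissible root gives the front-most crossing, which is the intersection the renderer shades.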

Figure 6: Use four values along the ray in each voxel to estimate the coefficients of the cubic polynomial.

Figure 7: Surface Peeling. (a) Illustration of surface peeling. (b) An example of surface peeling.

Figure 8: Voxel Selection. (a) Illustration of the process of voxel selecting. (b) Illustration of a voxel ID buffer.

Figure 9: Isosurface Segmentation. (a) 2D illustration of isosurface segmentation. (b) Use the length of the intersection of isosurface and voxel boundary as the weight.

The viewing ray enters the voxel at p0 and exits at p3. A value is fetched at each of p0, p1, p2, and p3, where p1 and p2 are the trisection points of the ray-voxel intersection.

5.2 Surface Peeling

When provided with a volume dataset of which we have little knowledge, the first issue of visualization should be to explore the dataset and identify the structures of interest. This has not been a simple issue for either full volume rendering or monotone isosurface rendering, due to the fact that many of the structures overlap each other.

For isosurface rendering, we can use cropping and surface peeling for this brief exploration. Cropping is a simple technique which just truncates the volume with a bounding box during rendering to localize the rendering region of the volume. Surface peeling is performed by jumping over the first few intersection points within some selected image plane areas, so the structures behind can be exposed. As is shown in Figure 7 (a), a peeling window is defined on the image plane. Within that window, the first ray-isosurface intersection is skipped over, so the second layer of the isosurface is rendered. In our implementation, we use an integer array as the "peeling buffer". The values are initialized to 0. Within each peeling window, the peeling value increases by 1. During rendering, the number of intersections is counted; if the count is no bigger than the peeling value, the intersection is skipped over. Figure 7 (b) gives an example of surface peeling, which contains two peeling windows.

5.3 Voxel Selection

As mentioned above, when ray traversal is done in a voxel-by-voxel manner and the intersections are accurately calculated, the exact intersecting voxel can be identified for each ray intersecting the isosurface. As is shown in Figure 8 (a), a unique ID is assigned to each voxel in a sequential manner. Then, a voxel ID buffer is used to store the intersecting voxel ID of each ray, which can be used for voxel selecting after the rendering. When the user clicks or drags the mouse on the image plane, the corresponding voxel IDs can be collected and organized. Figure 8 (b) illustrates a voxel ID buffer, where each small color patch is contained in one voxel. The color patches are neatly connected together due to the accurate ray-isosurface intersection test.

Voxel selection in a 3D view provides an intuitive and effective interaction tool, which we use for the seed selecting in isosurface segmentation.

5.4 Isosurface Segmentation

For some complicated medical datasets, segmentation is necessary for volume visualization. Traditionally, volume segmentation methods are used as preprocessing, through which a label volume is generated. Then, rendering is done by using different transfer functions for different volume components. These methods usually have three drawbacks. First, the segmentation is usually performed in slice views or the intensity domain, which is not intuitive. Second, the label volume, in which the segmentation result is stored, has very limited precision; staircase artifacts can frequently be seen in rendering. Third, volume segmentation itself can be very complicated and computationally expensive.

In this part, we propose an isosurface-domain segmentation method, in which we only consider the voxels containing the specific isosurface. The idea is that these voxels can be considered as forming an inter-connected network using the 6-neighborhood, as is shown in Figure 9 (a), where it is a 4-neighborhood in 2D. So the isosurface segmentation can be considered a graph segmentation problem.

With the voxel selection method described above, the seed points for the foreground and the background can be directly picked from the rendering view. In our implementation, the voxel IDs of the seed points are stored as two "set" structures, which are sorted and optimized for searching. The graph is then constructed by a breadth-


Table 1: Performance Comparison of Full Volume Rendering


(FVR) and Color Enhanced Isosurface Rendering (CEIR)

Kernel Execution
Dataset Volume Size Image Size
Time per Frame
FVR: 60.6232 ms
Mouse 512 × 512 × 512 765 × 592
CEIR: 4.66121 ms
FVR: 46.4839 ms
Knee 379 × 229 × 305 589 × 488
CEIR: 4.21262 ms
FVR: 34.7136 ms
Head 208 × 256 × 225 713 × 594
CEIR: 4.06191 ms

421 The rendering process is just slightly different from single isosur-
422 face rendering. During ray traversal, the first step is to find a voxel
Figure 10: Multi-surface-structure visualization 423 with a none-zero label. Then, within the voxel, the isovalue corre-
424 sponding to the label value is used for ray-isosurface intersection
425 test, and the corresponding speed-color map is used for color en-
378 first search from both seed sets. In the graph, the segmentation 426 hancement. Since the ray-isosurface intersection test can still be
379 problem can be defined as to find an optimal cut that separate the 427 performed at subvoxel level, the accuracy is preserved and is free
380 two seed sets, which can be solved with the min-cut algorithm after 428 of the stair case artifacts.
381 the weight of each link is decided. For the decision of the weights,
382 it is intuitive to consider the geometrically shortest cut as the opti- 429 6 Results
383 mal solution, since it introduces the smallest damage to the original
384 structure. So the weight should be defined as the length of the in- 430 We conducted several experiments on the proposed techniques on
385 tersection of the isosurface and the voxel boundary, as is shown in 431 a PC equipped with Intel i7 950 CPU, 4GB RAM, and NVIDIA
386 Figure 9 (b), the A-B segment. 432 Geforce GTX 480 graphics card. The rendering computation is im-
433 plemented on the GPU with CUDA.
387 If there is only one segment of intersection of the isosurface and the
388 voxel boundary, the length can be simply calculated by a numerical
389 integration. However, with trilinear interpolation in volume space, 434 6.1 Isosurface Color Enhancement
there can be at most two such segments, although this case is rare. In that case, usually only one of the two segments is of interest during segmentation, but there is no information about which segment it is. We therefore handle this case by dividing the total length of the two intersection segments by 2.

During seed selection, the current selection status is previewed with a selection-based coloring method. Since the seed sets are unlikely to be large, they are transferred directly to graphics memory as sequential lists. During rendering, for each intersection point, the voxel ID is tested by binary search against the two lists, and the color is decided according to which list, if any, contains the voxel.

With isosurface segmentation, the IDs of two sets of voxels are extracted, each containing part of the same isosurface; we call these surface structures. Such a surface structure is not guaranteed to be closed, so it cannot substitute for volume segmentation. However, we will show that surface structures can be recombined and rendered in a single scene with different coloring settings, which provides valuable visualization results.

5.5 Multi-surface-structure Visualization

To combine the surface structures into a single scene, a label volume of 8-bit integer values, ranging from 0 to 255, is used. The value 0 marks empty voxels, while values from 1 to 255 correspond to different surface structures, so at most 255 surface structures can be defined, which is sufficient for most applications. For each surface structure, an isovalue is stored: the isovalue used in the isosurface segmentation stage to extract that surface structure. Also, to enable color enhancement, each surface structure has its own local transfer function, and the speed-color maps are stored in a 2D look-up table in which each row corresponds to one speed-color map.

As mentioned before, isosurface color enhancement can be used in two ways, corresponding to the two methods of estimating the scalar changing rate. Using the shallow-neighborhood approximation, enhanced isosurface rendering can produce an effect very similar to full volume rendering with a transfer function that has a very narrow transitional section. As shown in Figure 11, the visual effect of color-enhanced isosurface rendering is very close to that of full volume rendering, while it takes only about 2.6 ms more per frame than monotone isosurface rendering.

The capability to reveal internal structures can also be compared with full volume rendering. Figure 12 shows three examples of rendering sample datasets with full volume rendering (FVR) and color-enhanced isosurface rendering (CEIR), and Table 1 gives the performance comparison of the two. With color-enhanced isosurface rendering, the essential structures of interest behind an isosurface can be depicted in a similar way as in full volume rendering, while the computational cost remains very low.

6.2 Explicit Scene Exploration

With explicit scene exploration, a volume dataset can be explored in an intuitive and efficient way. It comprises several techniques. Among them, surface peeling enables us to explore structures deep inside the dataset. It can be applied to any isosurface rendering view; for example, it can be directly combined with isosurface color enhancement, as in the example given in Figure 7 (b).

The other techniques mainly aim at the segmentation and recombination of the structures of interest. Figure 13 gives an example of seed picking and isosurface segmentation, demonstrating how a tumor is segmented from a brain MRI image. Since the segmentation works only on the voxels containing the isosurface, it can be done very efficiently.


In this example, the whole volume contains 256 × 256 × 124 = 8,126,464 voxels, but only 832,869 nodes are involved in the min-cut optimization, which takes 8.25 s to complete. If cropping is applied to restrict the region of interest, the computational cost can be made even smaller. As shown in Figure 14, in the cropped region the number of graph nodes is reduced to 40,830, and the computation time is reduced to 0.36 s.

Figure 11: Isosurface color enhancement example. (a) Ordinary monotone isosurface rendering. (b) Full volume rendering. (c) Color-enhanced isosurface rendering. Image size: 735×556. Rendering kernel execution times per frame: (a) 6.0834 ms; (b) 21.8971 ms; (c) 8.69162 ms.

Figure 12: Revealing internal structures. The pictures in the left column are rendered with full volume rendering (FVR), while the pictures in the right column are rendered with color-enhanced isosurface rendering (CEIR). (a)(b) The mouse data. (c)(d) The knee data. (e)(f) The head data.

Figure 13: Seed picking and isosurface segmentation. (a) Seed picking. (b) Isosurface segmentation. Volume size: 256×256×124. Number of graph nodes: 832,869. Segmentation computation time: 8.25 s.

Figure 14: Seed picking and isosurface segmentation in a cropped region. (a) Seed picking. (b) Isosurface segmentation. Number of graph nodes: 40,830. Segmentation computation time: 0.36 s.

Another benefit of isosurface segmentation is that the accurate ray-isosurface intersection can be calculated during rendering. As a result, the staircase artifacts usually seen in volume-segmentation-based rendering can be avoided.
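The exact crossing is commonly recovered by sampling the scalar field along the ray and interpolating where it passes the isovalue. The sketch below uses linear interpolation between successive samples, with a toy 1D field and a fixed step size as our own assumptions:

```python
def intersect_isosurface(f, iso, t0, t1, step=0.01):
    """March along the ray parameter t; when the sampled scalar crosses
    the isovalue, linearly interpolate the crossing point.
    Returns the interpolated t, or None if no crossing is found."""
    t, prev_t, prev_v = t0, t0, f(t0)
    while t < t1:
        t = min(t + step, t1)
        v = f(t)
        if (prev_v - iso) * (v - iso) <= 0 and v != prev_v:
            # Linear interpolation of the crossing inside [prev_t, t]
            a = (iso - prev_v) / (v - prev_v)
            return prev_t + a * (t - prev_t)
        prev_t, prev_v = t, v
    return None

# Toy 1D field along the ray: f(t) = t**2 with isovalue 0.25,
# whose true crossing is at t = 0.5.
t_hit = intersect_isosurface(lambda t: t * t, 0.25, 0.0, 1.0)
```

Real systems refine this further (e.g., repeated bisection or analytic solutions inside a cell), but even one linear interpolation already removes the voxel-sized stepping that causes staircase artifacts.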


Figure 15: Visualization accuracy comparison of volume segmentation and isosurface segmentation. (a) Rendering a segmented volume with linear boundary filtering. (b) Rendering a surface structure produced by isosurface segmentation.

As shown in Figure 15 (a), if volume segmentation is used, staircase artifacts are very likely to be introduced even if linear boundary filtering is applied. These artifacts are not seen in Figure 15 (b), where isosurface segmentation is used.

In some applications, such as surgical planning, it is essential to extract structures of interest and to analyze the spatial relationships between them. Our explicit scene exploration provides a powerful tool for these requirements. As shown in Figure 16, multiple organs are segmented and represented as surface structures, then recombined into a single scene for rendering.

Figure 16: Abdominal-structure segmentation and visualization.

7 Discussion

Among all volume visualization techniques, isosurface rendering may be the most intuitive and efficient method. It is consistent with the optical model most widely applied in b-rep graphics, which considers objects to have a solid silhouette, with the visual features lying mostly on the surface. This model faithfully depicts objects in reality, and the rendering result has a realistic appearance. However, for the same reason, one can argue that isosurface rendering might not have sufficient functionality for visualizing complex volume datasets. Our work shows that such a point of view is very likely biased. First, the intuitiveness and simplicity of isosurface rendering are advantages for computational performance, which is critical in interactive applications. Second, although it has limitations, much can be done to mitigate them and make isosurface rendering more effective. In this work, we presented a series of techniques that enhance the effectiveness of isosurface rendering and make it a more powerful tool suitable for more application scenarios.

To bring more information into isosurface visualization, we propose color-enhanced isosurface rendering. With this method, the color of the isosurface is no longer monotone: the isosurface can be considered to be painted with a texture generated according to the neighborhood density, in order to reveal the internal structures of the volume data. As such, the representative power of isosurface rendering is greatly extended, while its theoretical grounding can still be traced to full volume rendering.

Explicit scene exploration further extends isosurface rendering by allowing physically meaningful structures to be extracted from different isosurfaces. In contrast to direct volume segmentation, the isosurface segmentation method can be integrated more seamlessly into an isosurface rendering system. However, since an extracted surface structure is not guaranteed to be closed, it cannot replace volume segmentation. Despite this, we hope to extend the method to volume segmentation by closing the cuts generated during segmentation. In this way, the technique may be applied to requirements other than visualization, such as automated structure analysis.

Acknowledgements